00:00:00.001 Started by upstream project "autotest-per-patch" build number 130900 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.089 The recommended git tool is: git 00:00:00.089 using credential 00000000-0000-0000-0000-000000000002 00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.161 Fetching changes from the remote Git repository 00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.230 Using shallow fetch with depth 1 00:00:00.230 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.230 > git --version # timeout=10 00:00:00.294 > git --version # 'git version 2.39.2' 00:00:00.294 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.330 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.330 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.092 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.104 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.115 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:07.115 > git config core.sparsecheckout # timeout=10 00:00:07.126 > git read-tree -mu HEAD # timeout=10 00:00:07.141 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:07.157 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:07.157 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:07.243 [Pipeline] Start of Pipeline 00:00:07.253 [Pipeline] library 00:00:07.254 Loading library shm_lib@master 00:00:07.254 Library shm_lib@master is cached. Copying from home. 00:00:07.270 [Pipeline] node 00:00:07.284 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.286 [Pipeline] { 00:00:07.299 [Pipeline] catchError 00:00:07.300 [Pipeline] { 00:00:07.316 [Pipeline] wrap 00:00:07.326 [Pipeline] { 00:00:07.332 [Pipeline] stage 00:00:07.333 [Pipeline] { (Prologue) 00:00:07.349 [Pipeline] echo 00:00:07.350 Node: VM-host-SM16 00:00:07.357 [Pipeline] cleanWs 00:00:07.366 [WS-CLEANUP] Deleting project workspace... 00:00:07.366 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.371 [WS-CLEANUP] done 00:00:07.579 [Pipeline] setCustomBuildProperty 00:00:07.656 [Pipeline] httpRequest 00:00:08.044 [Pipeline] echo 00:00:08.045 Sorcerer 10.211.164.101 is alive 00:00:08.051 [Pipeline] retry 00:00:08.052 [Pipeline] { 00:00:08.061 [Pipeline] httpRequest 00:00:08.064 HttpMethod: GET 00:00:08.065 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.065 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.079 Response Code: HTTP/1.1 200 OK 00:00:08.079 Success: Status code 200 is in the accepted range: 200,404 00:00:08.080 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.694 [Pipeline] } 00:00:09.713 [Pipeline] // retry 00:00:09.721 [Pipeline] sh 00:00:10.002 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:10.015 [Pipeline] httpRequest 00:00:10.411 [Pipeline] echo 00:00:10.413 Sorcerer 10.211.164.101 is alive 00:00:10.424 [Pipeline] retry 00:00:10.426 [Pipeline] { 00:00:10.443 [Pipeline] httpRequest 00:00:10.448 HttpMethod: GET 00:00:10.449 URL: http://10.211.164.101/packages/spdk_91fca59bcb29e203aa17ccfc5010f6cf78c8ec8d.tar.gz 00:00:10.449 Sending request to url: http://10.211.164.101/packages/spdk_91fca59bcb29e203aa17ccfc5010f6cf78c8ec8d.tar.gz 00:00:10.466 Response Code: HTTP/1.1 200 OK 00:00:10.467 Success: Status code 200 is in the accepted range: 200,404 00:00:10.467 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_91fca59bcb29e203aa17ccfc5010f6cf78c8ec8d.tar.gz 00:00:41.897 [Pipeline] } 00:00:41.915 [Pipeline] // retry 00:00:41.922 [Pipeline] sh 00:00:42.212 + tar --no-same-owner -xf spdk_91fca59bcb29e203aa17ccfc5010f6cf78c8ec8d.tar.gz 00:00:44.774 [Pipeline] sh 00:00:45.056 + git -C spdk log --oneline -n5 00:00:45.056 91fca59bc lib/reduce: unlink meta file 00:00:45.056 92108e0a2 fsdev/aio: add support for null IOs 00:00:45.056 dcdab59d3 lib/reduce: Check return code of read superblock 00:00:45.056 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:00:45.056 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:00:45.075 [Pipeline] writeFile 00:00:45.090 [Pipeline] sh 00:00:45.371 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:45.383 [Pipeline] sh 00:00:45.663 + cat autorun-spdk.conf 00:00:45.663 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.663 SPDK_TEST_NVMF=1 00:00:45.663 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.663 SPDK_TEST_URING=1 00:00:45.663 SPDK_TEST_USDT=1 00:00:45.663 SPDK_RUN_UBSAN=1 00:00:45.663 NET_TYPE=virt 00:00:45.663 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:45.670 RUN_NIGHTLY=0 00:00:45.672 [Pipeline] } 00:00:45.685 [Pipeline] // stage 00:00:45.699 [Pipeline] stage 00:00:45.702 [Pipeline] { (Run VM) 00:00:45.715 [Pipeline] sh 00:00:45.995 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:45.995 + echo 'Start stage prepare_nvme.sh' 00:00:45.995 Start stage prepare_nvme.sh 00:00:45.995 + [[ -n 4 ]] 00:00:45.995 + disk_prefix=ex4 00:00:45.995 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:45.995 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:45.995 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:45.995 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.995 ++ SPDK_TEST_NVMF=1 00:00:45.995 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.995 ++ 
SPDK_TEST_URING=1 00:00:45.995 ++ SPDK_TEST_USDT=1 00:00:45.995 ++ SPDK_RUN_UBSAN=1 00:00:45.995 ++ NET_TYPE=virt 00:00:45.995 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:45.995 ++ RUN_NIGHTLY=0 00:00:45.995 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:45.995 + nvme_files=() 00:00:45.995 + declare -A nvme_files 00:00:45.995 + backend_dir=/var/lib/libvirt/images/backends 00:00:45.995 + nvme_files['nvme.img']=5G 00:00:45.995 + nvme_files['nvme-cmb.img']=5G 00:00:45.995 + nvme_files['nvme-multi0.img']=4G 00:00:45.995 + nvme_files['nvme-multi1.img']=4G 00:00:45.995 + nvme_files['nvme-multi2.img']=4G 00:00:45.995 + nvme_files['nvme-openstack.img']=8G 00:00:45.995 + nvme_files['nvme-zns.img']=5G 00:00:45.995 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:45.995 + (( SPDK_TEST_FTL == 1 )) 00:00:45.995 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:45.995 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:45.995 + for nvme in "${!nvme_files[@]}" 00:00:45.995 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:45.995 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:45.995 + for nvme in "${!nvme_files[@]}" 00:00:45.995 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:46.254 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:46.254 + for nvme in "${!nvme_files[@]}" 00:00:46.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:46.254 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:46.254 + for nvme in "${!nvme_files[@]}" 00:00:46.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:46.254 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:46.254 + for nvme in "${!nvme_files[@]}" 00:00:46.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:46.512 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:46.512 + for nvme in "${!nvme_files[@]}" 00:00:46.512 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:46.770 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:46.770 + for nvme in "${!nvme_files[@]}" 00:00:46.770 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:47.335 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.335 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:47.335 + echo 'End stage prepare_nvme.sh' 00:00:47.335 End stage prepare_nvme.sh 00:00:47.347 [Pipeline] sh 00:00:47.656 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:47.657 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b 
/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:00:47.657 00:00:47.657 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:47.657 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:47.657 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:47.657 HELP=0 00:00:47.657 DRY_RUN=0 00:00:47.657 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:00:47.657 NVME_DISKS_TYPE=nvme,nvme, 00:00:47.657 NVME_AUTO_CREATE=0 00:00:47.657 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:00:47.657 NVME_CMB=,, 00:00:47.657 NVME_PMR=,, 00:00:47.657 NVME_ZNS=,, 00:00:47.657 NVME_MS=,, 00:00:47.657 NVME_FDP=,, 00:00:47.657 SPDK_VAGRANT_DISTRO=fedora39 00:00:47.657 SPDK_VAGRANT_VMCPU=10 00:00:47.657 SPDK_VAGRANT_VMRAM=12288 00:00:47.657 SPDK_VAGRANT_PROVIDER=libvirt 00:00:47.657 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:47.657 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:47.657 SPDK_OPENSTACK_NETWORK=0 00:00:47.657 VAGRANT_PACKAGE_BOX=0 00:00:47.657 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:47.657 FORCE_DISTRO=true 00:00:47.657 VAGRANT_BOX_VERSION= 00:00:47.657 EXTRA_VAGRANTFILES= 00:00:47.657 NIC_MODEL=e1000 00:00:47.657 00:00:47.657 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:47.657 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:50.190 Bringing machine 'default' up with 'libvirt' provider... 00:00:50.755 ==> default: Creating image (snapshot of base box volume). 00:00:51.013 ==> default: Creating domain with the following settings... 
00:00:51.013 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728378402_a29f000ff7d8343bfba1 00:00:51.013 ==> default: -- Domain type: kvm 00:00:51.013 ==> default: -- Cpus: 10 00:00:51.013 ==> default: -- Feature: acpi 00:00:51.013 ==> default: -- Feature: apic 00:00:51.013 ==> default: -- Feature: pae 00:00:51.013 ==> default: -- Memory: 12288M 00:00:51.013 ==> default: -- Memory Backing: hugepages: 00:00:51.013 ==> default: -- Management MAC: 00:00:51.013 ==> default: -- Loader: 00:00:51.013 ==> default: -- Nvram: 00:00:51.013 ==> default: -- Base box: spdk/fedora39 00:00:51.013 ==> default: -- Storage pool: default 00:00:51.013 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728378402_a29f000ff7d8343bfba1.img (20G) 00:00:51.013 ==> default: -- Volume Cache: default 00:00:51.013 ==> default: -- Kernel: 00:00:51.013 ==> default: -- Initrd: 00:00:51.013 ==> default: -- Graphics Type: vnc 00:00:51.013 ==> default: -- Graphics Port: -1 00:00:51.013 ==> default: -- Graphics IP: 127.0.0.1 00:00:51.013 ==> default: -- Graphics Password: Not defined 00:00:51.013 ==> default: -- Video Type: cirrus 00:00:51.013 ==> default: -- Video VRAM: 9216 00:00:51.013 ==> default: -- Sound Type: 00:00:51.013 ==> default: -- Keymap: en-us 00:00:51.013 ==> default: -- TPM Path: 00:00:51.013 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:51.013 ==> default: -- Command line args: 00:00:51.013 ==> default: -> value=-device, 00:00:51.013 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:51.013 ==> default: -> value=-drive, 00:00:51.013 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:51.013 ==> default: -> value=-device, 00:00:51.013 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.013 ==> default: -> value=-device, 00:00:51.013 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:51.013 ==> default: -> value=-drive, 00:00:51.013 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:51.013 ==> default: -> value=-device, 00:00:51.013 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.013 ==> default: -> value=-drive, 00:00:51.013 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:51.013 ==> default: -> value=-device, 00:00:51.013 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.013 ==> default: -> value=-drive, 00:00:51.013 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:51.013 ==> default: -> value=-device, 00:00:51.013 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:51.013 ==> default: Creating shared folders metadata... 00:00:51.013 ==> default: Starting domain. 00:00:52.389 ==> default: Waiting for domain to get an IP address... 00:01:10.516 ==> default: Waiting for SSH to become available... 00:01:10.516 ==> default: Configuring and enabling network interfaces... 
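For reference, the nvme / nvme-ns device pairing vagrant emits above can be reproduced with a plain QEMU invocation. The sketch below is illustrative only — hypothetical image path and size, default machine type, and no bootable guest disk or libvirt domain — but it shows the same wiring the CI uses, with one raw backing file exposed as namespace 1 of an emulated controller:

    # Create a raw backing file and expose it as an NVMe namespace, mirroring
    # the -device/-drive pattern printed above. Paths and sizes are illustrative;
    # the bootable fedora39 guest image used by the CI is omitted.
    qemu-img create -f raw /tmp/ex-nvme.img 5G
    qemu-system-x86_64 -m 2048 -nographic -enable-kvm \
        -device nvme,id=nvme-0,serial=12340 \
        -drive format=raw,file=/tmp/ex-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096

Additional namespaces attach to the same controller by repeating the -drive / -device nvme-ns pair with nsid=2, nsid=3, which is how the multi0/multi1/multi2 backing files appear inside the guest as nvme1n1, nvme1n2 and nvme1n3 later in this log.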
00:01:13.800 default: SSH address: 192.168.121.6:22 00:01:13.800 default: SSH username: vagrant 00:01:13.800 default: SSH auth method: private key 00:01:15.702 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:23.846 ==> default: Mounting SSHFS shared folder... 00:01:24.782 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:24.782 ==> default: Checking Mount.. 00:01:26.187 ==> default: Folder Successfully Mounted! 00:01:26.187 ==> default: Running provisioner: file... 00:01:27.121 default: ~/.gitconfig => .gitconfig 00:01:27.380 00:01:27.380 SUCCESS! 00:01:27.380 00:01:27.380 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:27.380 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:27.380 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:27.380 00:01:27.389 [Pipeline] } 00:01:27.404 [Pipeline] // stage 00:01:27.414 [Pipeline] dir 00:01:27.414 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:27.416 [Pipeline] { 00:01:27.428 [Pipeline] catchError 00:01:27.430 [Pipeline] { 00:01:27.442 [Pipeline] sh 00:01:27.721 + vagrant ssh-config --host vagrant 00:01:27.721 + sed -ne /^Host/,$p 00:01:27.721 + tee ssh_conf 00:01:31.008 Host vagrant 00:01:31.008 HostName 192.168.121.6 00:01:31.008 User vagrant 00:01:31.008 Port 22 00:01:31.008 UserKnownHostsFile /dev/null 00:01:31.008 StrictHostKeyChecking no 00:01:31.008 PasswordAuthentication no 00:01:31.008 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:31.008 IdentitiesOnly yes 00:01:31.008 LogLevel FATAL 00:01:31.008 ForwardAgent yes 00:01:31.008 ForwardX11 yes 00:01:31.008 00:01:31.022 [Pipeline] withEnv 00:01:31.024 [Pipeline] { 00:01:31.038 [Pipeline] sh 00:01:31.318 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:31.318 source /etc/os-release 00:01:31.318 [[ -e /image.version ]] && img=$(< /image.version) 00:01:31.318 # Minimal, systemd-like check. 00:01:31.318 if [[ -e /.dockerenv ]]; then 00:01:31.318 # Clear garbage from the node's name: 00:01:31.318 # agt-er_autotest_547-896 -> autotest_547-896 00:01:31.318 # $HOSTNAME is the actual container id 00:01:31.318 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:31.318 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:31.318 # We can assume this is a mount from a host where container is running, 00:01:31.318 # so fetch its hostname to easily identify the target swarm worker. 
00:01:31.318 container="$(< /etc/hostname) ($agent)" 00:01:31.318 else 00:01:31.318 # Fallback 00:01:31.318 container=$agent 00:01:31.318 fi 00:01:31.318 fi 00:01:31.318 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:31.318 00:01:31.329 [Pipeline] } 00:01:31.344 [Pipeline] // withEnv 00:01:31.352 [Pipeline] setCustomBuildProperty 00:01:31.367 [Pipeline] stage 00:01:31.369 [Pipeline] { (Tests) 00:01:31.386 [Pipeline] sh 00:01:31.666 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:31.939 [Pipeline] sh 00:01:32.220 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:32.493 [Pipeline] timeout 00:01:32.493 Timeout set to expire in 1 hr 0 min 00:01:32.495 [Pipeline] { 00:01:32.508 [Pipeline] sh 00:01:32.788 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:33.354 HEAD is now at 91fca59bc lib/reduce: unlink meta file 00:01:33.366 [Pipeline] sh 00:01:33.647 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:33.918 [Pipeline] sh 00:01:34.199 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:34.215 [Pipeline] sh 00:01:34.494 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:34.754 ++ readlink -f spdk_repo 00:01:34.754 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:34.754 + [[ -n /home/vagrant/spdk_repo ]] 00:01:34.754 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:34.754 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:34.754 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:34.754 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:34.754 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:34.754 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:34.754 + cd /home/vagrant/spdk_repo 00:01:34.754 + source /etc/os-release 00:01:34.754 ++ NAME='Fedora Linux' 00:01:34.754 ++ VERSION='39 (Cloud Edition)' 00:01:34.754 ++ ID=fedora 00:01:34.754 ++ VERSION_ID=39 00:01:34.754 ++ VERSION_CODENAME= 00:01:34.754 ++ PLATFORM_ID=platform:f39 00:01:34.754 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:34.754 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:34.754 ++ LOGO=fedora-logo-icon 00:01:34.754 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:34.754 ++ HOME_URL=https://fedoraproject.org/ 00:01:34.754 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:34.754 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:34.754 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:34.754 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:34.754 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:34.754 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:34.754 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:34.754 ++ SUPPORT_END=2024-11-12 00:01:34.754 ++ VARIANT='Cloud Edition' 00:01:34.754 ++ VARIANT_ID=cloud 00:01:34.754 + uname -a 00:01:34.754 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:34.754 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:35.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:35.012 Hugepages 00:01:35.012 node hugesize free / total 00:01:35.012 node0 1048576kB 0 / 0 00:01:35.012 node0 2048kB 0 / 0 00:01:35.012 00:01:35.012 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:35.271 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:35.271 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:35.271 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:35.271 + rm -f /tmp/spdk-ld-path 00:01:35.271 + source autorun-spdk.conf 00:01:35.271 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.271 ++ SPDK_TEST_NVMF=1 00:01:35.271 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.271 ++ SPDK_TEST_URING=1 00:01:35.271 ++ SPDK_TEST_USDT=1 00:01:35.271 ++ SPDK_RUN_UBSAN=1 00:01:35.271 ++ NET_TYPE=virt 00:01:35.271 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.271 ++ RUN_NIGHTLY=0 00:01:35.271 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:35.271 + [[ -n '' ]] 00:01:35.271 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:35.271 + for M in /var/spdk/build-*-manifest.txt 00:01:35.271 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:35.271 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:35.271 + for M in /var/spdk/build-*-manifest.txt 00:01:35.271 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:35.271 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:35.271 + for M in /var/spdk/build-*-manifest.txt 00:01:35.271 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:35.271 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:35.271 ++ uname 00:01:35.271 + [[ Linux == \L\i\n\u\x ]] 00:01:35.271 + sudo dmesg -T 00:01:35.271 + sudo dmesg --clear 00:01:35.271 + dmesg_pid=5371 00:01:35.271 + [[ Fedora Linux == FreeBSD ]] 00:01:35.271 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:35.271 + sudo dmesg -Tw 00:01:35.271 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:35.271 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:35.271 + [[ -x /usr/src/fio-static/fio ]] 00:01:35.271 + export FIO_BIN=/usr/src/fio-static/fio 00:01:35.271 + FIO_BIN=/usr/src/fio-static/fio 00:01:35.271 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:35.271 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:35.271 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:35.271 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:35.271 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:35.271 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:35.271 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:35.271 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:35.271 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:35.271 Test configuration: 00:01:35.271 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.271 SPDK_TEST_NVMF=1 00:01:35.271 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.271 SPDK_TEST_URING=1 00:01:35.271 SPDK_TEST_USDT=1 00:01:35.271 SPDK_RUN_UBSAN=1 00:01:35.271 NET_TYPE=virt 00:01:35.271 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.530 RUN_NIGHTLY=0 09:07:26 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:35.530 09:07:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:35.530 09:07:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:35.530 09:07:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:35.530 09:07:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:35.530 09:07:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:35.530 09:07:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.530 09:07:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.530 09:07:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.530 09:07:26 -- paths/export.sh@5 -- $ export PATH 00:01:35.530 09:07:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.531 09:07:26 -- common/autobuild_common.sh@485 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:35.531 09:07:26 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:35.531 09:07:26 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728378446.XXXXXX 00:01:35.531 09:07:26 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728378446.KeE6Ku 00:01:35.531 09:07:26 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:35.531 09:07:26 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:35.531 09:07:26 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:35.531 09:07:26 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:35.531 09:07:26 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:35.531 09:07:26 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:35.531 09:07:26 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:35.531 09:07:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.531 09:07:26 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:35.531 09:07:26 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:35.531 09:07:26 -- pm/common@17 -- $ local monitor 00:01:35.531 09:07:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.531 09:07:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.531 09:07:26 -- pm/common@25 -- $ sleep 1 00:01:35.531 09:07:26 -- pm/common@21 -- $ date +%s 00:01:35.531 09:07:26 -- pm/common@21 -- $ date +%s 00:01:35.531 09:07:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728378446 00:01:35.531 09:07:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728378447 00:01:35.531 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728378447_collect-vmstat.pm.log 00:01:35.531 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728378446_collect-cpu-load.pm.log 00:01:36.467 09:07:27 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:36.467 09:07:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:36.467 09:07:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:36.467 09:07:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:36.467 09:07:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:36.468 Tue Oct 8 09:07:28 AM UTC 2024 00:01:36.468 09:07:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:36.468 v25.01-pre-42-g91fca59bc 00:01:36.468 09:07:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:36.468 09:07:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:36.468 09:07:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:36.468 09:07:28 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:36.468 09:07:28 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:36.468 09:07:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.468 
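The autorun-spdk.conf captured earlier is the only input spdk/autorun.sh needs; a minimal sketch of driving the same flow outside Jenkins (assuming an SPDK checkout in ./spdk and a writable working directory — both assumptions, not part of this log) looks like:

    # Hypothetical local reproduction: write the test configuration shown
    # earlier in this log, then hand it to autorun.sh, which drives the
    # build (autobuild) and test (autotest) stages from it.
    cat > autorun-spdk.conf <<'EOF'
    SPDK_RUN_FUNCTIONAL_TEST=1
    SPDK_TEST_NVMF=1
    SPDK_TEST_NVMF_TRANSPORT=tcp
    SPDK_TEST_URING=1
    SPDK_TEST_USDT=1
    SPDK_RUN_UBSAN=1
    NET_TYPE=virt
    SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
    RUN_NIGHTLY=0
    EOF
    ./spdk/autorun.sh "$PWD/autorun-spdk.conf"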
************************************ 00:01:36.468 START TEST ubsan 00:01:36.468 ************************************ 00:01:36.468 using ubsan 00:01:36.468 09:07:28 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:36.468 00:01:36.468 real 0m0.000s 00:01:36.468 user 0m0.000s 00:01:36.468 sys 0m0.000s 00:01:36.468 09:07:28 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:36.468 ************************************ 00:01:36.468 END TEST ubsan 00:01:36.468 09:07:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:36.468 ************************************ 00:01:36.468 09:07:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:36.468 09:07:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:36.468 09:07:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:36.468 09:07:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:36.468 09:07:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:36.468 09:07:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:36.468 09:07:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:36.468 09:07:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:36.468 09:07:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:36.727 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:36.727 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:36.985 Using 'verbs' RDMA provider 00:01:50.148 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:05.084 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:05.084 Creating mk/config.mk...done. 00:02:05.084 Creating mk/cc.flags.mk...done. 00:02:05.084 Type 'make' to build. 00:02:05.084 09:07:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:05.084 09:07:55 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:05.084 09:07:55 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:05.084 09:07:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.084 ************************************ 00:02:05.084 START TEST make 00:02:05.084 ************************************ 00:02:05.084 09:07:55 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:05.084 make[1]: Nothing to be done for 'all'. 
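The configure flags above are the ones get_config_params assembled earlier in the log; a sketch of reproducing this particular build by hand, assuming an SPDK checkout with submodules and fio sources under /usr/src/fio as on the CI image, is:

    # Sketch of the configure/build step performed above, run manually.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
        --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage \
        --with-ublk --with-uring --with-shared
    make -j10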
00:02:17.285 The Meson build system 00:02:17.285 Version: 1.5.0 00:02:17.285 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:17.285 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:17.285 Build type: native build 00:02:17.285 Program cat found: YES (/usr/bin/cat) 00:02:17.285 Project name: DPDK 00:02:17.285 Project version: 24.03.0 00:02:17.285 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:17.285 C linker for the host machine: cc ld.bfd 2.40-14 00:02:17.285 Host machine cpu family: x86_64 00:02:17.285 Host machine cpu: x86_64 00:02:17.285 Message: ## Building in Developer Mode ## 00:02:17.285 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:17.285 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:17.285 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:17.285 Program python3 found: YES (/usr/bin/python3) 00:02:17.285 Program cat found: YES (/usr/bin/cat) 00:02:17.285 Compiler for C supports arguments -march=native: YES 00:02:17.285 Checking for size of "void *" : 8 00:02:17.285 Checking for size of "void *" : 8 (cached) 00:02:17.285 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:17.285 Library m found: YES 00:02:17.285 Library numa found: YES 00:02:17.285 Has header "numaif.h" : YES 00:02:17.285 Library fdt found: NO 00:02:17.285 Library execinfo found: NO 00:02:17.285 Has header "execinfo.h" : YES 00:02:17.285 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:17.285 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:17.285 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:17.285 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:17.285 Run-time dependency openssl found: YES 3.1.1 00:02:17.285 Run-time dependency libpcap found: YES 1.10.4 00:02:17.285 Has header "pcap.h" with dependency libpcap: YES 00:02:17.285 Compiler for C supports arguments -Wcast-qual: YES 00:02:17.285 Compiler for C supports arguments -Wdeprecated: YES 00:02:17.285 Compiler for C supports arguments -Wformat: YES 00:02:17.285 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:17.285 Compiler for C supports arguments -Wformat-security: NO 00:02:17.285 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.285 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:17.285 Compiler for C supports arguments -Wnested-externs: YES 00:02:17.285 Compiler for C supports arguments -Wold-style-definition: YES 00:02:17.285 Compiler for C supports arguments -Wpointer-arith: YES 00:02:17.285 Compiler for C supports arguments -Wsign-compare: YES 00:02:17.285 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:17.285 Compiler for C supports arguments -Wundef: YES 00:02:17.285 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.285 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:17.285 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:17.285 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.285 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:17.285 Program objdump found: YES (/usr/bin/objdump) 00:02:17.285 Compiler for C supports arguments -mavx512f: YES 00:02:17.285 Checking if "AVX512 checking" compiles: YES 00:02:17.285 Fetching value of define "__SSE4_2__" : 1 00:02:17.285 Fetching value of define 
"__AES__" : 1 00:02:17.285 Fetching value of define "__AVX__" : 1 00:02:17.285 Fetching value of define "__AVX2__" : 1 00:02:17.285 Fetching value of define "__AVX512BW__" : (undefined) 00:02:17.285 Fetching value of define "__AVX512CD__" : (undefined) 00:02:17.285 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:17.285 Fetching value of define "__AVX512F__" : (undefined) 00:02:17.285 Fetching value of define "__AVX512VL__" : (undefined) 00:02:17.285 Fetching value of define "__PCLMUL__" : 1 00:02:17.285 Fetching value of define "__RDRND__" : 1 00:02:17.285 Fetching value of define "__RDSEED__" : 1 00:02:17.285 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:17.285 Fetching value of define "__znver1__" : (undefined) 00:02:17.285 Fetching value of define "__znver2__" : (undefined) 00:02:17.285 Fetching value of define "__znver3__" : (undefined) 00:02:17.285 Fetching value of define "__znver4__" : (undefined) 00:02:17.285 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:17.285 Message: lib/log: Defining dependency "log" 00:02:17.285 Message: lib/kvargs: Defining dependency "kvargs" 00:02:17.285 Message: lib/telemetry: Defining dependency "telemetry" 00:02:17.285 Checking for function "getentropy" : NO 00:02:17.285 Message: lib/eal: Defining dependency "eal" 00:02:17.285 Message: lib/ring: Defining dependency "ring" 00:02:17.285 Message: lib/rcu: Defining dependency "rcu" 00:02:17.285 Message: lib/mempool: Defining dependency "mempool" 00:02:17.285 Message: lib/mbuf: Defining dependency "mbuf" 00:02:17.285 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:17.285 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:17.285 Compiler for C supports arguments -mpclmul: YES 00:02:17.285 Compiler for C supports arguments -maes: YES 00:02:17.285 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.285 Compiler for C supports arguments -mavx512bw: YES 00:02:17.285 Compiler for C supports arguments -mavx512dq: YES 00:02:17.285 Compiler for C supports arguments -mavx512vl: YES 00:02:17.285 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:17.285 Compiler for C supports arguments -mavx2: YES 00:02:17.285 Compiler for C supports arguments -mavx: YES 00:02:17.285 Message: lib/net: Defining dependency "net" 00:02:17.285 Message: lib/meter: Defining dependency "meter" 00:02:17.285 Message: lib/ethdev: Defining dependency "ethdev" 00:02:17.285 Message: lib/pci: Defining dependency "pci" 00:02:17.285 Message: lib/cmdline: Defining dependency "cmdline" 00:02:17.285 Message: lib/hash: Defining dependency "hash" 00:02:17.285 Message: lib/timer: Defining dependency "timer" 00:02:17.285 Message: lib/compressdev: Defining dependency "compressdev" 00:02:17.285 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:17.285 Message: lib/dmadev: Defining dependency "dmadev" 00:02:17.285 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:17.285 Message: lib/power: Defining dependency "power" 00:02:17.285 Message: lib/reorder: Defining dependency "reorder" 00:02:17.285 Message: lib/security: Defining dependency "security" 00:02:17.285 Has header "linux/userfaultfd.h" : YES 00:02:17.285 Has header "linux/vduse.h" : YES 00:02:17.285 Message: lib/vhost: Defining dependency "vhost" 00:02:17.285 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:17.285 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.285 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:17.285 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.285 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:17.285 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:17.285 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:17.285 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:17.285 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:17.285 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:17.285 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:17.285 Configuring doxy-api-html.conf using configuration 00:02:17.285 Configuring doxy-api-man.conf using configuration 00:02:17.285 Program mandb found: YES (/usr/bin/mandb) 00:02:17.285 Program sphinx-build found: NO 00:02:17.285 Configuring rte_build_config.h using configuration 00:02:17.285 Message: 00:02:17.285 ================= 00:02:17.285 Applications Enabled 00:02:17.285 ================= 00:02:17.285 00:02:17.285 apps: 00:02:17.285 00:02:17.285 00:02:17.285 Message: 00:02:17.285 ================= 00:02:17.285 Libraries Enabled 00:02:17.285 ================= 00:02:17.285 00:02:17.285 libs: 00:02:17.285 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:17.285 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:17.285 cryptodev, dmadev, power, reorder, security, vhost, 00:02:17.285 00:02:17.285 Message: 00:02:17.285 =============== 00:02:17.285 Drivers Enabled 00:02:17.285 =============== 00:02:17.285 00:02:17.286 common: 00:02:17.286 00:02:17.286 bus: 00:02:17.286 pci, vdev, 00:02:17.286 mempool: 00:02:17.286 ring, 00:02:17.286 dma: 00:02:17.286 00:02:17.286 net: 00:02:17.286 00:02:17.286 crypto: 00:02:17.286 00:02:17.286 compress: 00:02:17.286 00:02:17.286 vdpa: 00:02:17.286 00:02:17.286 00:02:17.286 Message: 00:02:17.286 ================= 00:02:17.286 Content Skipped 00:02:17.286 ================= 00:02:17.286 00:02:17.286 apps: 00:02:17.286 dumpcap: explicitly disabled via build config 00:02:17.286 graph: explicitly disabled via build config 00:02:17.286 pdump: explicitly disabled via build config 00:02:17.286 proc-info: explicitly disabled via build config 00:02:17.286 test-acl: explicitly disabled via build config 00:02:17.286 test-bbdev: explicitly disabled via build config 00:02:17.286 test-cmdline: explicitly disabled via build config 00:02:17.286 test-compress-perf: explicitly disabled via build config 00:02:17.286 test-crypto-perf: explicitly disabled via build config 00:02:17.286 test-dma-perf: explicitly disabled via build config 00:02:17.286 test-eventdev: explicitly disabled via build config 00:02:17.286 test-fib: explicitly disabled via build config 00:02:17.286 test-flow-perf: explicitly disabled via build config 00:02:17.286 test-gpudev: explicitly disabled via build config 00:02:17.286 test-mldev: explicitly disabled via build config 00:02:17.286 test-pipeline: explicitly disabled via build config 00:02:17.286 test-pmd: explicitly disabled via build config 00:02:17.286 test-regex: explicitly disabled via build config 00:02:17.286 test-sad: explicitly disabled via build config 00:02:17.286 test-security-perf: explicitly disabled via build config 00:02:17.286 00:02:17.286 libs: 00:02:17.286 argparse: explicitly disabled via build config 00:02:17.286 metrics: explicitly disabled via build config 00:02:17.286 acl: explicitly disabled via build config 00:02:17.286 bbdev: explicitly disabled via build config 
00:02:17.286 bitratestats: explicitly disabled via build config 00:02:17.286 bpf: explicitly disabled via build config 00:02:17.286 cfgfile: explicitly disabled via build config 00:02:17.286 distributor: explicitly disabled via build config 00:02:17.286 efd: explicitly disabled via build config 00:02:17.286 eventdev: explicitly disabled via build config 00:02:17.286 dispatcher: explicitly disabled via build config 00:02:17.286 gpudev: explicitly disabled via build config 00:02:17.286 gro: explicitly disabled via build config 00:02:17.286 gso: explicitly disabled via build config 00:02:17.286 ip_frag: explicitly disabled via build config 00:02:17.286 jobstats: explicitly disabled via build config 00:02:17.286 latencystats: explicitly disabled via build config 00:02:17.286 lpm: explicitly disabled via build config 00:02:17.286 member: explicitly disabled via build config 00:02:17.286 pcapng: explicitly disabled via build config 00:02:17.286 rawdev: explicitly disabled via build config 00:02:17.286 regexdev: explicitly disabled via build config 00:02:17.286 mldev: explicitly disabled via build config 00:02:17.286 rib: explicitly disabled via build config 00:02:17.286 sched: explicitly disabled via build config 00:02:17.286 stack: explicitly disabled via build config 00:02:17.286 ipsec: explicitly disabled via build config 00:02:17.286 pdcp: explicitly disabled via build config 00:02:17.286 fib: explicitly disabled via build config 00:02:17.286 port: explicitly disabled via build config 00:02:17.286 pdump: explicitly disabled via build config 00:02:17.286 table: explicitly disabled via build config 00:02:17.286 pipeline: explicitly disabled via build config 00:02:17.286 graph: explicitly disabled via build config 00:02:17.286 node: explicitly disabled via build config 00:02:17.286 00:02:17.286 drivers: 00:02:17.286 common/cpt: not in enabled drivers build config 00:02:17.286 common/dpaax: not in enabled drivers build config 00:02:17.286 common/iavf: not in enabled drivers build config 00:02:17.286 common/idpf: not in enabled drivers build config 00:02:17.286 common/ionic: not in enabled drivers build config 00:02:17.286 common/mvep: not in enabled drivers build config 00:02:17.286 common/octeontx: not in enabled drivers build config 00:02:17.286 bus/auxiliary: not in enabled drivers build config 00:02:17.286 bus/cdx: not in enabled drivers build config 00:02:17.286 bus/dpaa: not in enabled drivers build config 00:02:17.286 bus/fslmc: not in enabled drivers build config 00:02:17.286 bus/ifpga: not in enabled drivers build config 00:02:17.286 bus/platform: not in enabled drivers build config 00:02:17.286 bus/uacce: not in enabled drivers build config 00:02:17.286 bus/vmbus: not in enabled drivers build config 00:02:17.286 common/cnxk: not in enabled drivers build config 00:02:17.286 common/mlx5: not in enabled drivers build config 00:02:17.286 common/nfp: not in enabled drivers build config 00:02:17.286 common/nitrox: not in enabled drivers build config 00:02:17.286 common/qat: not in enabled drivers build config 00:02:17.286 common/sfc_efx: not in enabled drivers build config 00:02:17.286 mempool/bucket: not in enabled drivers build config 00:02:17.286 mempool/cnxk: not in enabled drivers build config 00:02:17.286 mempool/dpaa: not in enabled drivers build config 00:02:17.286 mempool/dpaa2: not in enabled drivers build config 00:02:17.286 mempool/octeontx: not in enabled drivers build config 00:02:17.286 mempool/stack: not in enabled drivers build config 00:02:17.286 dma/cnxk: not in enabled 
drivers build config 00:02:17.286 dma/dpaa: not in enabled drivers build config 00:02:17.286 dma/dpaa2: not in enabled drivers build config 00:02:17.286 dma/hisilicon: not in enabled drivers build config 00:02:17.286 dma/idxd: not in enabled drivers build config 00:02:17.286 dma/ioat: not in enabled drivers build config 00:02:17.286 dma/skeleton: not in enabled drivers build config 00:02:17.286 net/af_packet: not in enabled drivers build config 00:02:17.286 net/af_xdp: not in enabled drivers build config 00:02:17.286 net/ark: not in enabled drivers build config 00:02:17.286 net/atlantic: not in enabled drivers build config 00:02:17.286 net/avp: not in enabled drivers build config 00:02:17.286 net/axgbe: not in enabled drivers build config 00:02:17.286 net/bnx2x: not in enabled drivers build config 00:02:17.286 net/bnxt: not in enabled drivers build config 00:02:17.286 net/bonding: not in enabled drivers build config 00:02:17.286 net/cnxk: not in enabled drivers build config 00:02:17.286 net/cpfl: not in enabled drivers build config 00:02:17.286 net/cxgbe: not in enabled drivers build config 00:02:17.286 net/dpaa: not in enabled drivers build config 00:02:17.286 net/dpaa2: not in enabled drivers build config 00:02:17.286 net/e1000: not in enabled drivers build config 00:02:17.286 net/ena: not in enabled drivers build config 00:02:17.286 net/enetc: not in enabled drivers build config 00:02:17.286 net/enetfec: not in enabled drivers build config 00:02:17.286 net/enic: not in enabled drivers build config 00:02:17.286 net/failsafe: not in enabled drivers build config 00:02:17.286 net/fm10k: not in enabled drivers build config 00:02:17.286 net/gve: not in enabled drivers build config 00:02:17.286 net/hinic: not in enabled drivers build config 00:02:17.286 net/hns3: not in enabled drivers build config 00:02:17.286 net/i40e: not in enabled drivers build config 00:02:17.286 net/iavf: not in enabled drivers build config 00:02:17.286 net/ice: not in enabled drivers build config 00:02:17.286 net/idpf: not in enabled drivers build config 00:02:17.286 net/igc: not in enabled drivers build config 00:02:17.286 net/ionic: not in enabled drivers build config 00:02:17.286 net/ipn3ke: not in enabled drivers build config 00:02:17.286 net/ixgbe: not in enabled drivers build config 00:02:17.286 net/mana: not in enabled drivers build config 00:02:17.286 net/memif: not in enabled drivers build config 00:02:17.286 net/mlx4: not in enabled drivers build config 00:02:17.286 net/mlx5: not in enabled drivers build config 00:02:17.286 net/mvneta: not in enabled drivers build config 00:02:17.286 net/mvpp2: not in enabled drivers build config 00:02:17.286 net/netvsc: not in enabled drivers build config 00:02:17.286 net/nfb: not in enabled drivers build config 00:02:17.286 net/nfp: not in enabled drivers build config 00:02:17.286 net/ngbe: not in enabled drivers build config 00:02:17.286 net/null: not in enabled drivers build config 00:02:17.286 net/octeontx: not in enabled drivers build config 00:02:17.286 net/octeon_ep: not in enabled drivers build config 00:02:17.286 net/pcap: not in enabled drivers build config 00:02:17.286 net/pfe: not in enabled drivers build config 00:02:17.286 net/qede: not in enabled drivers build config 00:02:17.286 net/ring: not in enabled drivers build config 00:02:17.286 net/sfc: not in enabled drivers build config 00:02:17.286 net/softnic: not in enabled drivers build config 00:02:17.286 net/tap: not in enabled drivers build config 00:02:17.286 net/thunderx: not in enabled drivers build 
config 00:02:17.286 net/txgbe: not in enabled drivers build config 00:02:17.286 net/vdev_netvsc: not in enabled drivers build config 00:02:17.286 net/vhost: not in enabled drivers build config 00:02:17.286 net/virtio: not in enabled drivers build config 00:02:17.286 net/vmxnet3: not in enabled drivers build config 00:02:17.286 raw/*: missing internal dependency, "rawdev" 00:02:17.286 crypto/armv8: not in enabled drivers build config 00:02:17.287 crypto/bcmfs: not in enabled drivers build config 00:02:17.287 crypto/caam_jr: not in enabled drivers build config 00:02:17.287 crypto/ccp: not in enabled drivers build config 00:02:17.287 crypto/cnxk: not in enabled drivers build config 00:02:17.287 crypto/dpaa_sec: not in enabled drivers build config 00:02:17.287 crypto/dpaa2_sec: not in enabled drivers build config 00:02:17.287 crypto/ipsec_mb: not in enabled drivers build config 00:02:17.287 crypto/mlx5: not in enabled drivers build config 00:02:17.287 crypto/mvsam: not in enabled drivers build config 00:02:17.287 crypto/nitrox: not in enabled drivers build config 00:02:17.287 crypto/null: not in enabled drivers build config 00:02:17.287 crypto/octeontx: not in enabled drivers build config 00:02:17.287 crypto/openssl: not in enabled drivers build config 00:02:17.287 crypto/scheduler: not in enabled drivers build config 00:02:17.287 crypto/uadk: not in enabled drivers build config 00:02:17.287 crypto/virtio: not in enabled drivers build config 00:02:17.287 compress/isal: not in enabled drivers build config 00:02:17.287 compress/mlx5: not in enabled drivers build config 00:02:17.287 compress/nitrox: not in enabled drivers build config 00:02:17.287 compress/octeontx: not in enabled drivers build config 00:02:17.287 compress/zlib: not in enabled drivers build config 00:02:17.287 regex/*: missing internal dependency, "regexdev" 00:02:17.287 ml/*: missing internal dependency, "mldev" 00:02:17.287 vdpa/ifc: not in enabled drivers build config 00:02:17.287 vdpa/mlx5: not in enabled drivers build config 00:02:17.287 vdpa/nfp: not in enabled drivers build config 00:02:17.287 vdpa/sfc: not in enabled drivers build config 00:02:17.287 event/*: missing internal dependency, "eventdev" 00:02:17.287 baseband/*: missing internal dependency, "bbdev" 00:02:17.287 gpu/*: missing internal dependency, "gpudev" 00:02:17.287 00:02:17.287 00:02:17.287 Build targets in project: 85 00:02:17.287 00:02:17.287 DPDK 24.03.0 00:02:17.287 00:02:17.287 User defined options 00:02:17.287 buildtype : debug 00:02:17.287 default_library : shared 00:02:17.287 libdir : lib 00:02:17.287 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:17.287 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:17.287 c_link_args : 00:02:17.287 cpu_instruction_set: native 00:02:17.287 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:17.287 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:17.287 enable_docs : false 00:02:17.287 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:17.287 enable_kmods : false 00:02:17.287 max_lcores : 128 00:02:17.287 tests : false 00:02:17.287 
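The "User defined options" summary above corresponds roughly to the meson setup sketched below. This is a reconstruction for readability (the SPDK build scripts assemble the actual command), and the long disable_apps / disable_libs lists are left out of the command since they are printed verbatim in the summary:

    # Approximate meson invocation matching the options summary above.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native -Dmax_lcores=128 -Dtests=false \
        -Denable_docs=false -Denable_kmods=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
        # plus -Ddisable_apps=... and -Ddisable_libs=... as listed in the summary
    ninja -C build-tmp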
00:02:17.287 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.287 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:17.287 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.287 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.287 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:17.287 [4/268] Linking static target lib/librte_kvargs.a 00:02:17.287 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.287 [6/268] Linking static target lib/librte_log.a 00:02:17.853 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.853 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:17.853 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:18.113 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:18.113 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:18.113 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:18.113 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:18.394 [14/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.394 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:18.394 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.394 [17/268] Linking target lib/librte_log.so.24.1 00:02:18.394 [18/268] Linking static target lib/librte_telemetry.a 00:02:18.394 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:18.394 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:18.660 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:18.660 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:18.919 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:18.919 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:18.919 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:18.919 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:18.919 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:18.919 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:19.178 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.178 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.178 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:19.437 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.437 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.437 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.437 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:19.437 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:19.696 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:19.955 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 
00:02:19.955 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:19.955 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:19.955 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.955 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:19.955 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:19.955 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:19.955 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:20.214 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:20.473 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:20.473 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:20.473 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:20.473 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:20.732 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:20.991 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:20.991 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:20.991 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:20.991 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:20.991 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:21.251 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:21.251 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:21.251 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:21.510 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:21.510 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:21.510 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:21.769 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:21.769 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:22.027 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:22.027 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:22.027 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:22.286 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:22.286 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:22.286 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:22.286 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:22.545 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:22.545 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:22.545 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:22.545 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:22.545 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:22.545 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:22.804 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:22.804 [79/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.064 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:23.064 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.064 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.064 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.323 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:23.323 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:23.323 [86/268] Linking static target lib/librte_eal.a 00:02:23.582 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:23.582 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:23.582 [89/268] Linking static target lib/librte_rcu.a 00:02:23.582 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:23.582 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:23.582 [92/268] Linking static target lib/librte_mempool.a 00:02:23.841 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.841 [94/268] Linking static target lib/librte_ring.a 00:02:23.841 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:23.841 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.101 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.101 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.101 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.101 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.101 [101/268] Linking static target lib/librte_mbuf.a 00:02:24.101 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.362 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.362 [104/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.362 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.620 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.620 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.620 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.620 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.620 [110/268] Linking static target lib/librte_net.a 00:02:24.620 [111/268] Linking static target lib/librte_meter.a 00:02:24.879 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.137 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:25.138 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:25.138 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.138 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.138 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.138 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:25.138 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:25.752 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.752 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:26.011 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:26.011 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:26.270 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:26.270 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:26.270 [126/268] Linking static target lib/librte_pci.a 00:02:26.270 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:26.529 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:26.529 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:26.529 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:26.529 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:26.529 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:26.529 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.796 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:26.796 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:26.796 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:26.796 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:26.796 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:26.796 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:26.796 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:26.796 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:26.796 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.056 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:27.056 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:27.056 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.315 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:27.315 [147/268] Linking static target lib/librte_ethdev.a 00:02:27.315 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.315 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.315 [150/268] Linking static target lib/librte_cmdline.a 00:02:27.573 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.573 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:27.573 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:27.832 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.832 [155/268] Linking static target lib/librte_timer.a 00:02:27.833 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:27.833 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:27.833 [158/268] Linking static target lib/librte_hash.a 00:02:28.091 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:28.350 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:28.350 [161/268] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:28.350 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:28.350 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.350 [164/268] Linking static target lib/librte_compressdev.a 00:02:28.609 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:28.868 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:28.868 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:28.868 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.868 [169/268] Linking static target lib/librte_dmadev.a 00:02:29.127 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.127 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.127 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:29.127 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:29.127 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:29.127 [175/268] Linking static target lib/librte_cryptodev.a 00:02:29.127 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.386 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.645 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.645 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:29.645 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:29.904 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:29.904 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:29.904 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:29.904 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.471 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.471 [186/268] Linking static target lib/librte_power.a 00:02:30.471 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:30.471 [188/268] Linking static target lib/librte_reorder.a 00:02:30.471 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:30.471 [190/268] Linking static target lib/librte_security.a 00:02:30.730 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:30.730 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:30.730 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:30.988 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:30.988 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.258 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.524 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.783 [198/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.783 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 
00:02:31.783 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:31.783 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:31.783 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.043 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.043 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.613 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:32.613 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:32.613 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:32.613 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.613 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:32.613 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:32.613 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.872 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.872 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.872 [214/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:32.872 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.872 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:32.872 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.872 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.872 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.130 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:33.130 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:33.130 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:33.130 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.130 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:33.130 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.130 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.130 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:33.389 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.957 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:33.957 [230/268] Linking static target lib/librte_vhost.a 00:02:34.894 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.894 [232/268] Linking target lib/librte_eal.so.24.1 00:02:35.153 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:35.153 [234/268] Linking target lib/librte_meter.so.24.1 00:02:35.153 [235/268] Linking target lib/librte_timer.so.24.1 00:02:35.153 [236/268] Linking target lib/librte_ring.so.24.1 00:02:35.153 [237/268] Linking target lib/librte_pci.so.24.1 00:02:35.153 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:35.153 [239/268] Linking target 
lib/librte_dmadev.so.24.1 00:02:35.413 [240/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.413 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:35.413 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:35.413 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:35.413 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:35.413 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:35.413 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:35.413 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:35.413 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:35.413 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:35.413 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:35.413 [251/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.672 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:35.672 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:35.672 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.672 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:35.672 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:35.672 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:35.672 [258/268] Linking target lib/librte_net.so.24.1 00:02:35.931 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:35.931 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:35.931 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:35.931 [262/268] Linking target lib/librte_security.so.24.1 00:02:35.931 [263/268] Linking target lib/librte_hash.so.24.1 00:02:35.931 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:36.190 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:36.190 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:36.190 [267/268] Linking target lib/librte_power.so.24.1 00:02:36.190 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:36.190 INFO: autodetecting backend as ninja 00:02:36.190 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:08.304 CC lib/log/log.o 00:03:08.304 CC lib/ut/ut.o 00:03:08.304 CC lib/log/log_deprecated.o 00:03:08.304 CC lib/ut_mock/mock.o 00:03:08.304 CC lib/log/log_flags.o 00:03:08.304 LIB libspdk_ut.a 00:03:08.304 SO libspdk_ut.so.2.0 00:03:08.304 LIB libspdk_log.a 00:03:08.304 SYMLINK libspdk_ut.so 00:03:08.304 SO libspdk_log.so.7.0 00:03:08.304 LIB libspdk_ut_mock.a 00:03:08.304 SO libspdk_ut_mock.so.6.0 00:03:08.304 SYMLINK libspdk_log.so 00:03:08.304 SYMLINK libspdk_ut_mock.so 00:03:08.304 CXX lib/trace_parser/trace.o 00:03:08.304 CC lib/dma/dma.o 00:03:08.304 CC lib/ioat/ioat.o 00:03:08.304 CC lib/util/base64.o 00:03:08.304 CC lib/util/bit_array.o 00:03:08.304 CC lib/util/cpuset.o 00:03:08.304 CC lib/util/crc16.o 00:03:08.304 CC lib/util/crc32.o 00:03:08.304 CC lib/util/crc32c.o 00:03:08.304 CC lib/vfio_user/host/vfio_user_pci.o 00:03:08.304 CC lib/util/crc32_ieee.o 00:03:08.304 CC lib/util/crc64.o 00:03:08.304 
LIB libspdk_dma.a 00:03:08.304 CC lib/util/dif.o 00:03:08.304 CC lib/util/fd.o 00:03:08.304 SO libspdk_dma.so.5.0 00:03:08.304 CC lib/vfio_user/host/vfio_user.o 00:03:08.304 CC lib/util/fd_group.o 00:03:08.304 CC lib/util/file.o 00:03:08.304 CC lib/util/hexlify.o 00:03:08.304 SYMLINK libspdk_dma.so 00:03:08.304 CC lib/util/iov.o 00:03:08.304 CC lib/util/math.o 00:03:08.304 LIB libspdk_ioat.a 00:03:08.304 CC lib/util/net.o 00:03:08.304 SO libspdk_ioat.so.7.0 00:03:08.304 LIB libspdk_vfio_user.a 00:03:08.304 SO libspdk_vfio_user.so.5.0 00:03:08.304 CC lib/util/pipe.o 00:03:08.304 CC lib/util/strerror_tls.o 00:03:08.304 CC lib/util/string.o 00:03:08.304 SYMLINK libspdk_ioat.so 00:03:08.304 CC lib/util/uuid.o 00:03:08.304 CC lib/util/xor.o 00:03:08.304 SYMLINK libspdk_vfio_user.so 00:03:08.304 CC lib/util/md5.o 00:03:08.304 CC lib/util/zipf.o 00:03:08.304 LIB libspdk_util.a 00:03:08.304 SO libspdk_util.so.10.0 00:03:08.304 LIB libspdk_trace_parser.a 00:03:08.304 SO libspdk_trace_parser.so.6.0 00:03:08.304 SYMLINK libspdk_util.so 00:03:08.304 SYMLINK libspdk_trace_parser.so 00:03:08.304 CC lib/rdma_utils/rdma_utils.o 00:03:08.304 CC lib/rdma_provider/common.o 00:03:08.304 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:08.304 CC lib/conf/conf.o 00:03:08.304 CC lib/idxd/idxd.o 00:03:08.304 CC lib/json/json_parse.o 00:03:08.304 CC lib/idxd/idxd_user.o 00:03:08.304 CC lib/idxd/idxd_kernel.o 00:03:08.304 CC lib/vmd/vmd.o 00:03:08.304 CC lib/env_dpdk/env.o 00:03:08.304 CC lib/vmd/led.o 00:03:08.304 CC lib/json/json_util.o 00:03:08.304 LIB libspdk_rdma_provider.a 00:03:08.304 LIB libspdk_conf.a 00:03:08.304 CC lib/json/json_write.o 00:03:08.305 CC lib/env_dpdk/memory.o 00:03:08.305 SO libspdk_rdma_provider.so.6.0 00:03:08.305 SO libspdk_conf.so.6.0 00:03:08.305 LIB libspdk_rdma_utils.a 00:03:08.305 SO libspdk_rdma_utils.so.1.0 00:03:08.305 SYMLINK libspdk_conf.so 00:03:08.305 SYMLINK libspdk_rdma_provider.so 00:03:08.305 CC lib/env_dpdk/pci.o 00:03:08.305 CC lib/env_dpdk/init.o 00:03:08.305 CC lib/env_dpdk/threads.o 00:03:08.305 SYMLINK libspdk_rdma_utils.so 00:03:08.305 CC lib/env_dpdk/pci_ioat.o 00:03:08.305 CC lib/env_dpdk/pci_virtio.o 00:03:08.305 CC lib/env_dpdk/pci_vmd.o 00:03:08.305 CC lib/env_dpdk/pci_idxd.o 00:03:08.305 LIB libspdk_json.a 00:03:08.305 CC lib/env_dpdk/pci_event.o 00:03:08.305 LIB libspdk_idxd.a 00:03:08.305 SO libspdk_json.so.6.0 00:03:08.305 SO libspdk_idxd.so.12.1 00:03:08.305 SYMLINK libspdk_json.so 00:03:08.305 CC lib/env_dpdk/sigbus_handler.o 00:03:08.305 CC lib/env_dpdk/pci_dpdk.o 00:03:08.305 LIB libspdk_vmd.a 00:03:08.305 SYMLINK libspdk_idxd.so 00:03:08.305 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.305 SO libspdk_vmd.so.6.0 00:03:08.305 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.305 SYMLINK libspdk_vmd.so 00:03:08.305 CC lib/jsonrpc/jsonrpc_server.o 00:03:08.305 CC lib/jsonrpc/jsonrpc_client.o 00:03:08.305 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:08.305 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:08.305 LIB libspdk_jsonrpc.a 00:03:08.305 SO libspdk_jsonrpc.so.6.0 00:03:08.305 SYMLINK libspdk_jsonrpc.so 00:03:08.305 CC lib/rpc/rpc.o 00:03:08.305 LIB libspdk_env_dpdk.a 00:03:08.305 LIB libspdk_rpc.a 00:03:08.305 SO libspdk_rpc.so.6.0 00:03:08.305 SO libspdk_env_dpdk.so.15.0 00:03:08.305 SYMLINK libspdk_rpc.so 00:03:08.305 SYMLINK libspdk_env_dpdk.so 00:03:08.305 CC lib/notify/notify.o 00:03:08.305 CC lib/notify/notify_rpc.o 00:03:08.305 CC lib/keyring/keyring.o 00:03:08.305 CC lib/keyring/keyring_rpc.o 00:03:08.305 CC lib/trace/trace_flags.o 00:03:08.305 CC 
lib/trace/trace.o 00:03:08.305 CC lib/trace/trace_rpc.o 00:03:08.305 LIB libspdk_notify.a 00:03:08.305 SO libspdk_notify.so.6.0 00:03:08.305 LIB libspdk_keyring.a 00:03:08.305 SYMLINK libspdk_notify.so 00:03:08.305 LIB libspdk_trace.a 00:03:08.305 SO libspdk_keyring.so.2.0 00:03:08.305 SO libspdk_trace.so.11.0 00:03:08.305 SYMLINK libspdk_keyring.so 00:03:08.305 SYMLINK libspdk_trace.so 00:03:08.305 CC lib/sock/sock_rpc.o 00:03:08.305 CC lib/sock/sock.o 00:03:08.305 CC lib/thread/iobuf.o 00:03:08.305 CC lib/thread/thread.o 00:03:08.305 LIB libspdk_sock.a 00:03:08.305 SO libspdk_sock.so.10.0 00:03:08.305 SYMLINK libspdk_sock.so 00:03:08.565 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:08.565 CC lib/nvme/nvme_ctrlr.o 00:03:08.565 CC lib/nvme/nvme_fabric.o 00:03:08.565 CC lib/nvme/nvme_ns.o 00:03:08.565 CC lib/nvme/nvme_ns_cmd.o 00:03:08.565 CC lib/nvme/nvme_pcie.o 00:03:08.565 CC lib/nvme/nvme_pcie_common.o 00:03:08.565 CC lib/nvme/nvme_qpair.o 00:03:08.565 CC lib/nvme/nvme.o 00:03:09.501 LIB libspdk_thread.a 00:03:09.501 CC lib/nvme/nvme_quirks.o 00:03:09.501 CC lib/nvme/nvme_transport.o 00:03:09.501 SO libspdk_thread.so.10.2 00:03:09.501 CC lib/nvme/nvme_discovery.o 00:03:09.501 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:09.501 SYMLINK libspdk_thread.so 00:03:09.501 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:09.501 CC lib/nvme/nvme_tcp.o 00:03:09.759 CC lib/nvme/nvme_opal.o 00:03:09.759 CC lib/nvme/nvme_io_msg.o 00:03:09.759 CC lib/nvme/nvme_poll_group.o 00:03:10.016 CC lib/nvme/nvme_zns.o 00:03:10.016 CC lib/nvme/nvme_stubs.o 00:03:10.275 CC lib/nvme/nvme_auth.o 00:03:10.275 CC lib/nvme/nvme_cuse.o 00:03:10.275 CC lib/nvme/nvme_rdma.o 00:03:10.533 CC lib/accel/accel.o 00:03:10.533 CC lib/accel/accel_rpc.o 00:03:10.533 CC lib/blob/blobstore.o 00:03:10.791 CC lib/accel/accel_sw.o 00:03:10.791 CC lib/blob/request.o 00:03:10.791 CC lib/blob/zeroes.o 00:03:11.049 CC lib/blob/blob_bs_dev.o 00:03:11.308 CC lib/init/json_config.o 00:03:11.308 CC lib/init/rpc.o 00:03:11.308 CC lib/init/subsystem_rpc.o 00:03:11.308 CC lib/init/subsystem.o 00:03:11.308 CC lib/virtio/virtio.o 00:03:11.308 CC lib/virtio/virtio_vhost_user.o 00:03:11.308 CC lib/fsdev/fsdev.o 00:03:11.308 CC lib/virtio/virtio_vfio_user.o 00:03:11.566 CC lib/virtio/virtio_pci.o 00:03:11.566 CC lib/fsdev/fsdev_io.o 00:03:11.566 LIB libspdk_init.a 00:03:11.566 SO libspdk_init.so.6.0 00:03:11.566 LIB libspdk_accel.a 00:03:11.566 CC lib/fsdev/fsdev_rpc.o 00:03:11.566 SYMLINK libspdk_init.so 00:03:11.566 LIB libspdk_nvme.a 00:03:11.566 SO libspdk_accel.so.16.0 00:03:11.825 SYMLINK libspdk_accel.so 00:03:11.825 LIB libspdk_virtio.a 00:03:11.825 SO libspdk_nvme.so.14.0 00:03:11.825 SO libspdk_virtio.so.7.0 00:03:11.825 CC lib/event/app.o 00:03:11.825 CC lib/event/reactor.o 00:03:11.825 CC lib/event/log_rpc.o 00:03:11.825 CC lib/event/app_rpc.o 00:03:11.825 CC lib/event/scheduler_static.o 00:03:11.825 SYMLINK libspdk_virtio.so 00:03:11.825 CC lib/bdev/bdev.o 00:03:11.825 CC lib/bdev/bdev_rpc.o 00:03:12.083 LIB libspdk_fsdev.a 00:03:12.083 CC lib/bdev/bdev_zone.o 00:03:12.083 CC lib/bdev/part.o 00:03:12.083 SO libspdk_fsdev.so.1.0 00:03:12.083 SYMLINK libspdk_nvme.so 00:03:12.083 CC lib/bdev/scsi_nvme.o 00:03:12.083 SYMLINK libspdk_fsdev.so 00:03:12.342 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:12.342 LIB libspdk_event.a 00:03:12.342 SO libspdk_event.so.15.0 00:03:12.342 SYMLINK libspdk_event.so 00:03:12.915 LIB libspdk_fuse_dispatcher.a 00:03:12.915 SO libspdk_fuse_dispatcher.so.1.0 00:03:13.173 SYMLINK libspdk_fuse_dispatcher.so 00:03:13.741 LIB 
libspdk_blob.a 00:03:13.741 SO libspdk_blob.so.11.0 00:03:13.741 SYMLINK libspdk_blob.so 00:03:14.000 CC lib/lvol/lvol.o 00:03:14.000 CC lib/blobfs/blobfs.o 00:03:14.000 CC lib/blobfs/tree.o 00:03:14.605 LIB libspdk_bdev.a 00:03:14.864 SO libspdk_bdev.so.17.0 00:03:14.864 SYMLINK libspdk_bdev.so 00:03:14.864 LIB libspdk_blobfs.a 00:03:14.864 SO libspdk_blobfs.so.10.0 00:03:15.123 SYMLINK libspdk_blobfs.so 00:03:15.123 CC lib/nbd/nbd.o 00:03:15.123 CC lib/nbd/nbd_rpc.o 00:03:15.123 CC lib/scsi/dev.o 00:03:15.123 CC lib/scsi/port.o 00:03:15.123 CC lib/scsi/scsi.o 00:03:15.123 CC lib/scsi/lun.o 00:03:15.123 CC lib/ublk/ublk.o 00:03:15.123 CC lib/nvmf/ctrlr.o 00:03:15.123 CC lib/ftl/ftl_core.o 00:03:15.123 LIB libspdk_lvol.a 00:03:15.123 SO libspdk_lvol.so.10.0 00:03:15.380 SYMLINK libspdk_lvol.so 00:03:15.380 CC lib/ftl/ftl_init.o 00:03:15.380 CC lib/ftl/ftl_layout.o 00:03:15.380 CC lib/ftl/ftl_debug.o 00:03:15.380 CC lib/ftl/ftl_io.o 00:03:15.380 CC lib/ftl/ftl_sb.o 00:03:15.380 CC lib/scsi/scsi_bdev.o 00:03:15.380 CC lib/scsi/scsi_pr.o 00:03:15.640 LIB libspdk_nbd.a 00:03:15.640 SO libspdk_nbd.so.7.0 00:03:15.640 CC lib/scsi/scsi_rpc.o 00:03:15.640 CC lib/scsi/task.o 00:03:15.640 CC lib/ublk/ublk_rpc.o 00:03:15.640 SYMLINK libspdk_nbd.so 00:03:15.640 CC lib/ftl/ftl_l2p.o 00:03:15.640 CC lib/nvmf/ctrlr_discovery.o 00:03:15.640 CC lib/ftl/ftl_l2p_flat.o 00:03:15.640 CC lib/nvmf/ctrlr_bdev.o 00:03:15.640 CC lib/nvmf/subsystem.o 00:03:15.899 LIB libspdk_ublk.a 00:03:15.899 CC lib/ftl/ftl_nv_cache.o 00:03:15.899 SO libspdk_ublk.so.3.0 00:03:15.899 CC lib/ftl/ftl_band.o 00:03:15.899 CC lib/ftl/ftl_band_ops.o 00:03:15.899 CC lib/ftl/ftl_writer.o 00:03:15.899 SYMLINK libspdk_ublk.so 00:03:15.899 CC lib/ftl/ftl_rq.o 00:03:15.899 LIB libspdk_scsi.a 00:03:15.899 SO libspdk_scsi.so.9.0 00:03:16.158 CC lib/ftl/ftl_reloc.o 00:03:16.158 SYMLINK libspdk_scsi.so 00:03:16.158 CC lib/ftl/ftl_l2p_cache.o 00:03:16.158 CC lib/nvmf/nvmf.o 00:03:16.158 CC lib/ftl/ftl_p2l.o 00:03:16.158 CC lib/ftl/ftl_p2l_log.o 00:03:16.158 CC lib/ftl/mngt/ftl_mngt.o 00:03:16.416 CC lib/nvmf/nvmf_rpc.o 00:03:16.416 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:16.416 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:16.674 CC lib/nvmf/transport.o 00:03:16.674 CC lib/nvmf/tcp.o 00:03:16.674 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:16.674 CC lib/nvmf/stubs.o 00:03:16.674 CC lib/nvmf/mdns_server.o 00:03:16.932 CC lib/nvmf/rdma.o 00:03:16.932 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:16.932 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:16.932 CC lib/nvmf/auth.o 00:03:16.932 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:17.190 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:17.190 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:17.190 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:17.190 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:17.190 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:17.190 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:17.449 CC lib/iscsi/conn.o 00:03:17.449 CC lib/ftl/utils/ftl_conf.o 00:03:17.449 CC lib/ftl/utils/ftl_md.o 00:03:17.449 CC lib/ftl/utils/ftl_mempool.o 00:03:17.449 CC lib/ftl/utils/ftl_bitmap.o 00:03:17.449 CC lib/ftl/utils/ftl_property.o 00:03:17.708 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:17.709 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:17.709 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:17.709 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:17.709 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:17.968 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:17.968 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:17.968 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:17.968 CC lib/ftl/upgrade/ftl_sb_v5.o 
00:03:17.968 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:17.968 CC lib/iscsi/init_grp.o 00:03:17.968 CC lib/iscsi/iscsi.o 00:03:18.226 CC lib/iscsi/param.o 00:03:18.226 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:18.226 CC lib/vhost/vhost.o 00:03:18.226 CC lib/vhost/vhost_rpc.o 00:03:18.226 CC lib/vhost/vhost_scsi.o 00:03:18.226 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:18.226 CC lib/vhost/vhost_blk.o 00:03:18.226 CC lib/vhost/rte_vhost_user.o 00:03:18.485 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:18.485 CC lib/ftl/base/ftl_base_dev.o 00:03:18.485 CC lib/iscsi/portal_grp.o 00:03:18.744 CC lib/iscsi/tgt_node.o 00:03:18.744 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.744 CC lib/iscsi/iscsi_subsystem.o 00:03:18.744 CC lib/ftl/ftl_trace.o 00:03:18.744 CC lib/iscsi/iscsi_rpc.o 00:03:19.003 CC lib/iscsi/task.o 00:03:19.003 LIB libspdk_nvmf.a 00:03:19.003 LIB libspdk_ftl.a 00:03:19.003 SO libspdk_nvmf.so.19.0 00:03:19.262 SYMLINK libspdk_nvmf.so 00:03:19.262 SO libspdk_ftl.so.9.0 00:03:19.520 LIB libspdk_vhost.a 00:03:19.520 LIB libspdk_iscsi.a 00:03:19.520 SO libspdk_vhost.so.8.0 00:03:19.520 SO libspdk_iscsi.so.8.0 00:03:19.520 SYMLINK libspdk_vhost.so 00:03:19.779 SYMLINK libspdk_ftl.so 00:03:19.779 SYMLINK libspdk_iscsi.so 00:03:20.038 CC module/env_dpdk/env_dpdk_rpc.o 00:03:20.038 CC module/keyring/file/keyring.o 00:03:20.038 CC module/fsdev/aio/fsdev_aio.o 00:03:20.038 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:20.038 CC module/keyring/linux/keyring.o 00:03:20.038 CC module/accel/error/accel_error.o 00:03:20.038 CC module/accel/dsa/accel_dsa.o 00:03:20.038 CC module/accel/ioat/accel_ioat.o 00:03:20.038 CC module/sock/posix/posix.o 00:03:20.038 CC module/blob/bdev/blob_bdev.o 00:03:20.296 LIB libspdk_env_dpdk_rpc.a 00:03:20.296 SO libspdk_env_dpdk_rpc.so.6.0 00:03:20.296 SYMLINK libspdk_env_dpdk_rpc.so 00:03:20.296 CC module/accel/dsa/accel_dsa_rpc.o 00:03:20.296 CC module/keyring/linux/keyring_rpc.o 00:03:20.296 CC module/keyring/file/keyring_rpc.o 00:03:20.296 CC module/accel/error/accel_error_rpc.o 00:03:20.296 CC module/accel/ioat/accel_ioat_rpc.o 00:03:20.296 LIB libspdk_scheduler_dynamic.a 00:03:20.296 SO libspdk_scheduler_dynamic.so.4.0 00:03:20.296 LIB libspdk_keyring_linux.a 00:03:20.555 LIB libspdk_accel_dsa.a 00:03:20.555 LIB libspdk_blob_bdev.a 00:03:20.555 LIB libspdk_keyring_file.a 00:03:20.555 SO libspdk_keyring_linux.so.1.0 00:03:20.555 SO libspdk_accel_dsa.so.5.0 00:03:20.555 SO libspdk_blob_bdev.so.11.0 00:03:20.555 SYMLINK libspdk_scheduler_dynamic.so 00:03:20.555 SO libspdk_keyring_file.so.2.0 00:03:20.555 LIB libspdk_accel_error.a 00:03:20.555 LIB libspdk_accel_ioat.a 00:03:20.555 SYMLINK libspdk_keyring_linux.so 00:03:20.555 SO libspdk_accel_error.so.2.0 00:03:20.555 SYMLINK libspdk_accel_dsa.so 00:03:20.555 SYMLINK libspdk_keyring_file.so 00:03:20.555 SYMLINK libspdk_blob_bdev.so 00:03:20.555 SO libspdk_accel_ioat.so.6.0 00:03:20.555 SYMLINK libspdk_accel_ioat.so 00:03:20.555 SYMLINK libspdk_accel_error.so 00:03:20.555 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:20.555 CC module/fsdev/aio/linux_aio_mgr.o 00:03:20.555 CC module/accel/iaa/accel_iaa.o 00:03:20.555 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:20.814 CC module/scheduler/gscheduler/gscheduler.o 00:03:20.814 CC module/sock/uring/uring.o 00:03:20.814 CC module/accel/iaa/accel_iaa_rpc.o 00:03:20.814 LIB libspdk_fsdev_aio.a 00:03:20.814 LIB libspdk_scheduler_dpdk_governor.a 00:03:20.814 CC module/bdev/delay/vbdev_delay.o 00:03:20.814 CC module/blobfs/bdev/blobfs_bdev.o 00:03:20.814 LIB 
libspdk_sock_posix.a 00:03:20.814 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:20.814 SO libspdk_fsdev_aio.so.1.0 00:03:20.814 SO libspdk_sock_posix.so.6.0 00:03:20.814 LIB libspdk_scheduler_gscheduler.a 00:03:21.073 LIB libspdk_accel_iaa.a 00:03:21.073 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:21.073 SO libspdk_scheduler_gscheduler.so.4.0 00:03:21.073 SYMLINK libspdk_fsdev_aio.so 00:03:21.073 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:21.073 CC module/bdev/error/vbdev_error.o 00:03:21.073 SYMLINK libspdk_sock_posix.so 00:03:21.073 CC module/bdev/error/vbdev_error_rpc.o 00:03:21.073 SO libspdk_accel_iaa.so.3.0 00:03:21.073 SYMLINK libspdk_scheduler_gscheduler.so 00:03:21.073 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:21.073 CC module/bdev/gpt/gpt.o 00:03:21.073 SYMLINK libspdk_accel_iaa.so 00:03:21.073 CC module/bdev/lvol/vbdev_lvol.o 00:03:21.073 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:21.073 CC module/bdev/gpt/vbdev_gpt.o 00:03:21.332 CC module/bdev/malloc/bdev_malloc.o 00:03:21.332 LIB libspdk_blobfs_bdev.a 00:03:21.332 CC module/bdev/null/bdev_null.o 00:03:21.332 LIB libspdk_bdev_delay.a 00:03:21.332 CC module/bdev/null/bdev_null_rpc.o 00:03:21.332 LIB libspdk_bdev_error.a 00:03:21.332 SO libspdk_blobfs_bdev.so.6.0 00:03:21.332 SO libspdk_bdev_delay.so.6.0 00:03:21.332 SO libspdk_bdev_error.so.6.0 00:03:21.332 SYMLINK libspdk_blobfs_bdev.so 00:03:21.332 SYMLINK libspdk_bdev_error.so 00:03:21.332 SYMLINK libspdk_bdev_delay.so 00:03:21.332 LIB libspdk_bdev_gpt.a 00:03:21.590 LIB libspdk_sock_uring.a 00:03:21.590 SO libspdk_bdev_gpt.so.6.0 00:03:21.590 SO libspdk_sock_uring.so.5.0 00:03:21.590 LIB libspdk_bdev_null.a 00:03:21.590 CC module/bdev/nvme/bdev_nvme.o 00:03:21.590 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:21.590 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.590 CC module/bdev/raid/bdev_raid.o 00:03:21.590 SO libspdk_bdev_null.so.6.0 00:03:21.590 SYMLINK libspdk_bdev_gpt.so 00:03:21.590 SYMLINK libspdk_sock_uring.so 00:03:21.590 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:21.590 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:21.590 CC module/bdev/split/vbdev_split.o 00:03:21.590 SYMLINK libspdk_bdev_null.so 00:03:21.590 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.590 LIB libspdk_bdev_lvol.a 00:03:21.590 SO libspdk_bdev_lvol.so.6.0 00:03:21.848 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:21.848 CC module/bdev/nvme/nvme_rpc.o 00:03:21.848 SYMLINK libspdk_bdev_lvol.so 00:03:21.848 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:21.848 LIB libspdk_bdev_malloc.a 00:03:21.848 SO libspdk_bdev_malloc.so.6.0 00:03:21.848 LIB libspdk_bdev_passthru.a 00:03:21.848 LIB libspdk_bdev_split.a 00:03:21.848 SO libspdk_bdev_passthru.so.6.0 00:03:21.848 SYMLINK libspdk_bdev_malloc.so 00:03:21.848 SO libspdk_bdev_split.so.6.0 00:03:21.848 SYMLINK libspdk_bdev_passthru.so 00:03:21.848 CC module/bdev/uring/bdev_uring.o 00:03:22.107 SYMLINK libspdk_bdev_split.so 00:03:22.107 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.107 CC module/bdev/uring/bdev_uring_rpc.o 00:03:22.107 CC module/bdev/aio/bdev_aio.o 00:03:22.107 LIB libspdk_bdev_zone_block.a 00:03:22.107 CC module/bdev/ftl/bdev_ftl.o 00:03:22.107 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.107 SO libspdk_bdev_zone_block.so.6.0 00:03:22.107 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.107 CC module/bdev/nvme/vbdev_opal.o 00:03:22.107 SYMLINK libspdk_bdev_zone_block.so 00:03:22.107 CC module/bdev/aio/bdev_aio_rpc.o 00:03:22.107 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:22.365 LIB libspdk_bdev_uring.a 
00:03:22.365 SO libspdk_bdev_uring.so.6.0 00:03:22.365 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.365 CC module/bdev/raid/bdev_raid_rpc.o 00:03:22.365 LIB libspdk_bdev_ftl.a 00:03:22.365 LIB libspdk_bdev_aio.a 00:03:22.365 SYMLINK libspdk_bdev_uring.so 00:03:22.365 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.365 SO libspdk_bdev_ftl.so.6.0 00:03:22.624 SO libspdk_bdev_aio.so.6.0 00:03:22.624 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.624 CC module/bdev/raid/raid0.o 00:03:22.624 SYMLINK libspdk_bdev_ftl.so 00:03:22.624 CC module/bdev/raid/raid1.o 00:03:22.624 CC module/bdev/raid/concat.o 00:03:22.624 SYMLINK libspdk_bdev_aio.so 00:03:22.624 LIB libspdk_bdev_iscsi.a 00:03:22.624 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.624 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.624 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.624 SO libspdk_bdev_iscsi.so.6.0 00:03:22.883 SYMLINK libspdk_bdev_iscsi.so 00:03:22.883 LIB libspdk_bdev_raid.a 00:03:22.883 SO libspdk_bdev_raid.so.6.0 00:03:23.141 SYMLINK libspdk_bdev_raid.so 00:03:23.141 LIB libspdk_bdev_virtio.a 00:03:23.399 SO libspdk_bdev_virtio.so.6.0 00:03:23.399 SYMLINK libspdk_bdev_virtio.so 00:03:23.966 LIB libspdk_bdev_nvme.a 00:03:23.966 SO libspdk_bdev_nvme.so.7.0 00:03:23.966 SYMLINK libspdk_bdev_nvme.so 00:03:24.533 CC module/event/subsystems/vmd/vmd.o 00:03:24.533 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.533 CC module/event/subsystems/scheduler/scheduler.o 00:03:24.533 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.533 CC module/event/subsystems/iobuf/iobuf.o 00:03:24.533 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:24.533 CC module/event/subsystems/sock/sock.o 00:03:24.533 CC module/event/subsystems/fsdev/fsdev.o 00:03:24.533 CC module/event/subsystems/keyring/keyring.o 00:03:24.791 LIB libspdk_event_scheduler.a 00:03:24.791 LIB libspdk_event_vhost_blk.a 00:03:24.791 LIB libspdk_event_vmd.a 00:03:24.791 LIB libspdk_event_iobuf.a 00:03:24.791 LIB libspdk_event_fsdev.a 00:03:24.791 SO libspdk_event_scheduler.so.4.0 00:03:24.791 LIB libspdk_event_keyring.a 00:03:24.791 LIB libspdk_event_sock.a 00:03:24.791 SO libspdk_event_vhost_blk.so.3.0 00:03:24.791 SO libspdk_event_vmd.so.6.0 00:03:24.791 SO libspdk_event_fsdev.so.1.0 00:03:24.791 SO libspdk_event_keyring.so.1.0 00:03:24.791 SO libspdk_event_iobuf.so.3.0 00:03:24.791 SO libspdk_event_sock.so.5.0 00:03:24.791 SYMLINK libspdk_event_scheduler.so 00:03:24.791 SYMLINK libspdk_event_vhost_blk.so 00:03:24.791 SYMLINK libspdk_event_vmd.so 00:03:24.791 SYMLINK libspdk_event_keyring.so 00:03:24.791 SYMLINK libspdk_event_fsdev.so 00:03:24.791 SYMLINK libspdk_event_sock.so 00:03:24.791 SYMLINK libspdk_event_iobuf.so 00:03:25.050 CC module/event/subsystems/accel/accel.o 00:03:25.309 LIB libspdk_event_accel.a 00:03:25.309 SO libspdk_event_accel.so.6.0 00:03:25.309 SYMLINK libspdk_event_accel.so 00:03:25.567 CC module/event/subsystems/bdev/bdev.o 00:03:25.826 LIB libspdk_event_bdev.a 00:03:25.826 SO libspdk_event_bdev.so.6.0 00:03:26.084 SYMLINK libspdk_event_bdev.so 00:03:26.343 CC module/event/subsystems/ublk/ublk.o 00:03:26.343 CC module/event/subsystems/nbd/nbd.o 00:03:26.343 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:26.343 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:26.343 CC module/event/subsystems/scsi/scsi.o 00:03:26.343 LIB libspdk_event_nbd.a 00:03:26.343 LIB libspdk_event_ublk.a 00:03:26.343 SO libspdk_event_nbd.so.6.0 00:03:26.343 LIB libspdk_event_scsi.a 00:03:26.343 SO libspdk_event_ublk.so.3.0 00:03:26.602 SO 
libspdk_event_scsi.so.6.0 00:03:26.602 SYMLINK libspdk_event_nbd.so 00:03:26.602 SYMLINK libspdk_event_ublk.so 00:03:26.602 SYMLINK libspdk_event_scsi.so 00:03:26.602 LIB libspdk_event_nvmf.a 00:03:26.602 SO libspdk_event_nvmf.so.6.0 00:03:26.602 SYMLINK libspdk_event_nvmf.so 00:03:26.860 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:26.860 CC module/event/subsystems/iscsi/iscsi.o 00:03:26.860 LIB libspdk_event_vhost_scsi.a 00:03:26.860 SO libspdk_event_vhost_scsi.so.3.0 00:03:27.118 LIB libspdk_event_iscsi.a 00:03:27.118 SYMLINK libspdk_event_vhost_scsi.so 00:03:27.118 SO libspdk_event_iscsi.so.6.0 00:03:27.118 SYMLINK libspdk_event_iscsi.so 00:03:27.377 SO libspdk.so.6.0 00:03:27.377 SYMLINK libspdk.so 00:03:27.636 CC app/trace_record/trace_record.o 00:03:27.636 CXX app/trace/trace.o 00:03:27.636 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:27.636 CC app/iscsi_tgt/iscsi_tgt.o 00:03:27.636 CC app/nvmf_tgt/nvmf_main.o 00:03:27.636 CC test/thread/poller_perf/poller_perf.o 00:03:27.636 CC examples/ioat/perf/perf.o 00:03:27.636 CC examples/util/zipf/zipf.o 00:03:27.636 CC test/dma/test_dma/test_dma.o 00:03:27.636 CC test/app/bdev_svc/bdev_svc.o 00:03:27.895 LINK interrupt_tgt 00:03:27.895 LINK nvmf_tgt 00:03:27.895 LINK spdk_trace_record 00:03:27.895 LINK poller_perf 00:03:27.895 LINK zipf 00:03:27.895 LINK iscsi_tgt 00:03:27.896 LINK ioat_perf 00:03:27.896 LINK bdev_svc 00:03:28.155 LINK spdk_trace 00:03:28.155 CC app/spdk_lspci/spdk_lspci.o 00:03:28.155 TEST_HEADER include/spdk/accel.h 00:03:28.156 CC examples/ioat/verify/verify.o 00:03:28.156 TEST_HEADER include/spdk/accel_module.h 00:03:28.156 TEST_HEADER include/spdk/assert.h 00:03:28.156 TEST_HEADER include/spdk/barrier.h 00:03:28.156 TEST_HEADER include/spdk/base64.h 00:03:28.156 TEST_HEADER include/spdk/bdev.h 00:03:28.156 TEST_HEADER include/spdk/bdev_module.h 00:03:28.156 TEST_HEADER include/spdk/bdev_zone.h 00:03:28.156 TEST_HEADER include/spdk/bit_array.h 00:03:28.156 TEST_HEADER include/spdk/bit_pool.h 00:03:28.156 TEST_HEADER include/spdk/blob_bdev.h 00:03:28.156 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:28.156 TEST_HEADER include/spdk/blobfs.h 00:03:28.156 TEST_HEADER include/spdk/blob.h 00:03:28.156 TEST_HEADER include/spdk/conf.h 00:03:28.156 TEST_HEADER include/spdk/config.h 00:03:28.156 TEST_HEADER include/spdk/cpuset.h 00:03:28.156 TEST_HEADER include/spdk/crc16.h 00:03:28.156 TEST_HEADER include/spdk/crc32.h 00:03:28.156 TEST_HEADER include/spdk/crc64.h 00:03:28.156 TEST_HEADER include/spdk/dif.h 00:03:28.156 TEST_HEADER include/spdk/dma.h 00:03:28.156 TEST_HEADER include/spdk/endian.h 00:03:28.156 TEST_HEADER include/spdk/env_dpdk.h 00:03:28.156 TEST_HEADER include/spdk/env.h 00:03:28.156 TEST_HEADER include/spdk/event.h 00:03:28.156 TEST_HEADER include/spdk/fd_group.h 00:03:28.156 TEST_HEADER include/spdk/fd.h 00:03:28.156 TEST_HEADER include/spdk/file.h 00:03:28.156 TEST_HEADER include/spdk/fsdev.h 00:03:28.156 TEST_HEADER include/spdk/fsdev_module.h 00:03:28.156 TEST_HEADER include/spdk/ftl.h 00:03:28.156 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:28.156 TEST_HEADER include/spdk/gpt_spec.h 00:03:28.156 CC app/spdk_nvme_perf/perf.o 00:03:28.156 CC app/spdk_tgt/spdk_tgt.o 00:03:28.156 TEST_HEADER include/spdk/hexlify.h 00:03:28.156 TEST_HEADER include/spdk/histogram_data.h 00:03:28.156 TEST_HEADER include/spdk/idxd.h 00:03:28.156 TEST_HEADER include/spdk/idxd_spec.h 00:03:28.156 TEST_HEADER include/spdk/init.h 00:03:28.156 TEST_HEADER include/spdk/ioat.h 00:03:28.156 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:28.156 TEST_HEADER include/spdk/iscsi_spec.h 00:03:28.156 TEST_HEADER include/spdk/json.h 00:03:28.156 TEST_HEADER include/spdk/jsonrpc.h 00:03:28.415 TEST_HEADER include/spdk/keyring.h 00:03:28.415 TEST_HEADER include/spdk/keyring_module.h 00:03:28.415 TEST_HEADER include/spdk/likely.h 00:03:28.415 TEST_HEADER include/spdk/log.h 00:03:28.415 TEST_HEADER include/spdk/lvol.h 00:03:28.415 TEST_HEADER include/spdk/md5.h 00:03:28.415 TEST_HEADER include/spdk/memory.h 00:03:28.415 TEST_HEADER include/spdk/mmio.h 00:03:28.415 TEST_HEADER include/spdk/nbd.h 00:03:28.415 TEST_HEADER include/spdk/net.h 00:03:28.415 TEST_HEADER include/spdk/notify.h 00:03:28.415 TEST_HEADER include/spdk/nvme.h 00:03:28.415 TEST_HEADER include/spdk/nvme_intel.h 00:03:28.415 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:28.415 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:28.415 TEST_HEADER include/spdk/nvme_spec.h 00:03:28.415 TEST_HEADER include/spdk/nvme_zns.h 00:03:28.415 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:28.415 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:28.415 TEST_HEADER include/spdk/nvmf.h 00:03:28.415 LINK test_dma 00:03:28.415 TEST_HEADER include/spdk/nvmf_spec.h 00:03:28.415 TEST_HEADER include/spdk/nvmf_transport.h 00:03:28.415 TEST_HEADER include/spdk/opal.h 00:03:28.415 TEST_HEADER include/spdk/opal_spec.h 00:03:28.415 TEST_HEADER include/spdk/pci_ids.h 00:03:28.415 TEST_HEADER include/spdk/pipe.h 00:03:28.415 TEST_HEADER include/spdk/queue.h 00:03:28.416 CC examples/thread/thread/thread_ex.o 00:03:28.416 LINK spdk_lspci 00:03:28.416 TEST_HEADER include/spdk/reduce.h 00:03:28.416 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:28.416 TEST_HEADER include/spdk/rpc.h 00:03:28.416 TEST_HEADER include/spdk/scheduler.h 00:03:28.416 TEST_HEADER include/spdk/scsi.h 00:03:28.416 TEST_HEADER include/spdk/scsi_spec.h 00:03:28.416 TEST_HEADER include/spdk/sock.h 00:03:28.416 TEST_HEADER include/spdk/stdinc.h 00:03:28.416 TEST_HEADER include/spdk/string.h 00:03:28.416 TEST_HEADER include/spdk/thread.h 00:03:28.416 TEST_HEADER include/spdk/trace.h 00:03:28.416 TEST_HEADER include/spdk/trace_parser.h 00:03:28.416 TEST_HEADER include/spdk/tree.h 00:03:28.416 TEST_HEADER include/spdk/ublk.h 00:03:28.416 TEST_HEADER include/spdk/util.h 00:03:28.416 TEST_HEADER include/spdk/uuid.h 00:03:28.416 TEST_HEADER include/spdk/version.h 00:03:28.416 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:28.416 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:28.416 TEST_HEADER include/spdk/vhost.h 00:03:28.416 TEST_HEADER include/spdk/vmd.h 00:03:28.416 TEST_HEADER include/spdk/xor.h 00:03:28.416 TEST_HEADER include/spdk/zipf.h 00:03:28.416 CXX test/cpp_headers/accel.o 00:03:28.416 CC test/app/histogram_perf/histogram_perf.o 00:03:28.416 LINK verify 00:03:28.416 LINK spdk_tgt 00:03:28.416 CXX test/cpp_headers/accel_module.o 00:03:28.674 CC test/env/mem_callbacks/mem_callbacks.o 00:03:28.674 CXX test/cpp_headers/assert.o 00:03:28.674 CXX test/cpp_headers/barrier.o 00:03:28.674 LINK thread 00:03:28.674 LINK histogram_perf 00:03:28.674 LINK nvme_fuzz 00:03:28.674 CXX test/cpp_headers/base64.o 00:03:28.674 CXX test/cpp_headers/bdev.o 00:03:28.933 CC examples/sock/hello_world/hello_sock.o 00:03:28.933 CXX test/cpp_headers/bdev_module.o 00:03:28.933 CC test/env/vtophys/vtophys.o 00:03:28.933 CC examples/vmd/lsvmd/lsvmd.o 00:03:28.933 CC examples/idxd/perf/perf.o 00:03:29.191 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:29.191 CC app/spdk_nvme_identify/identify.o 00:03:29.191 LINK vtophys 00:03:29.191 CXX 
test/cpp_headers/bdev_zone.o 00:03:29.191 LINK hello_sock 00:03:29.191 LINK lsvmd 00:03:29.191 CC examples/accel/perf/accel_perf.o 00:03:29.191 LINK mem_callbacks 00:03:29.191 LINK spdk_nvme_perf 00:03:29.191 CXX test/cpp_headers/bit_array.o 00:03:29.191 CXX test/cpp_headers/bit_pool.o 00:03:29.450 CXX test/cpp_headers/blob_bdev.o 00:03:29.450 LINK idxd_perf 00:03:29.450 CC examples/vmd/led/led.o 00:03:29.450 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:29.708 CXX test/cpp_headers/blobfs_bdev.o 00:03:29.709 LINK led 00:03:29.709 LINK env_dpdk_post_init 00:03:29.709 CC examples/nvme/hello_world/hello_world.o 00:03:29.709 CC examples/blob/hello_world/hello_blob.o 00:03:29.709 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:29.709 CC examples/blob/cli/blobcli.o 00:03:29.709 LINK accel_perf 00:03:29.709 CXX test/cpp_headers/blobfs.o 00:03:29.968 LINK hello_world 00:03:29.968 CC test/env/memory/memory_ut.o 00:03:29.968 CXX test/cpp_headers/blob.o 00:03:29.968 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.968 LINK hello_blob 00:03:29.968 LINK spdk_nvme_identify 00:03:29.968 LINK hello_fsdev 00:03:30.227 CC examples/nvme/reconnect/reconnect.o 00:03:30.227 CXX test/cpp_headers/conf.o 00:03:30.227 LINK spdk_nvme_discover 00:03:30.227 LINK blobcli 00:03:30.227 CC app/spdk_top/spdk_top.o 00:03:30.227 CC test/app/jsoncat/jsoncat.o 00:03:30.227 CXX test/cpp_headers/config.o 00:03:30.227 CC examples/bdev/hello_world/hello_bdev.o 00:03:30.227 CC test/app/stub/stub.o 00:03:30.227 CXX test/cpp_headers/cpuset.o 00:03:30.486 CXX test/cpp_headers/crc16.o 00:03:30.486 LINK jsoncat 00:03:30.486 LINK reconnect 00:03:30.486 CC app/vhost/vhost.o 00:03:30.486 LINK stub 00:03:30.486 LINK hello_bdev 00:03:30.745 CXX test/cpp_headers/crc32.o 00:03:30.745 CC app/spdk_dd/spdk_dd.o 00:03:30.745 LINK vhost 00:03:30.745 LINK iscsi_fuzz 00:03:30.745 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:30.745 CC app/fio/nvme/fio_plugin.o 00:03:30.745 CXX test/cpp_headers/crc64.o 00:03:31.004 CC examples/bdev/bdevperf/bdevperf.o 00:03:31.004 CC examples/nvme/arbitration/arbitration.o 00:03:31.004 CXX test/cpp_headers/dif.o 00:03:31.004 CC app/fio/bdev/fio_plugin.o 00:03:31.004 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:31.263 LINK memory_ut 00:03:31.263 LINK spdk_dd 00:03:31.263 LINK spdk_top 00:03:31.263 CXX test/cpp_headers/dma.o 00:03:31.263 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:31.263 LINK arbitration 00:03:31.263 LINK nvme_manage 00:03:31.522 CXX test/cpp_headers/endian.o 00:03:31.522 LINK spdk_nvme 00:03:31.522 CC examples/nvme/hotplug/hotplug.o 00:03:31.522 CC test/env/pci/pci_ut.o 00:03:31.522 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:31.522 CXX test/cpp_headers/env_dpdk.o 00:03:31.522 CC examples/nvme/abort/abort.o 00:03:31.522 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:31.522 LINK spdk_bdev 00:03:31.781 LINK cmb_copy 00:03:31.781 CC test/event/event_perf/event_perf.o 00:03:31.781 LINK hotplug 00:03:31.781 LINK bdevperf 00:03:31.781 LINK vhost_fuzz 00:03:31.781 CXX test/cpp_headers/env.o 00:03:31.781 LINK pmr_persistence 00:03:31.781 CC test/event/reactor/reactor.o 00:03:31.781 LINK pci_ut 00:03:31.781 LINK event_perf 00:03:32.040 CXX test/cpp_headers/event.o 00:03:32.040 CC test/event/reactor_perf/reactor_perf.o 00:03:32.040 LINK abort 00:03:32.040 CC test/event/app_repeat/app_repeat.o 00:03:32.040 CC test/rpc_client/rpc_client_test.o 00:03:32.040 LINK reactor 00:03:32.040 CC test/nvme/aer/aer.o 00:03:32.040 CXX test/cpp_headers/fd_group.o 00:03:32.040 LINK 
reactor_perf 00:03:32.299 CC test/event/scheduler/scheduler.o 00:03:32.299 LINK app_repeat 00:03:32.299 CC test/accel/dif/dif.o 00:03:32.299 LINK rpc_client_test 00:03:32.299 CXX test/cpp_headers/fd.o 00:03:32.299 CC test/blobfs/mkfs/mkfs.o 00:03:32.299 CC test/nvme/reset/reset.o 00:03:32.299 CC examples/nvmf/nvmf/nvmf.o 00:03:32.557 LINK aer 00:03:32.557 CXX test/cpp_headers/file.o 00:03:32.557 LINK scheduler 00:03:32.557 CC test/nvme/sgl/sgl.o 00:03:32.557 CC test/lvol/esnap/esnap.o 00:03:32.557 CXX test/cpp_headers/fsdev.o 00:03:32.557 CXX test/cpp_headers/fsdev_module.o 00:03:32.557 LINK mkfs 00:03:32.557 CXX test/cpp_headers/ftl.o 00:03:32.816 LINK reset 00:03:32.816 CC test/nvme/e2edp/nvme_dp.o 00:03:32.816 CXX test/cpp_headers/fuse_dispatcher.o 00:03:32.816 LINK nvmf 00:03:32.816 LINK sgl 00:03:32.816 LINK dif 00:03:32.816 CXX test/cpp_headers/gpt_spec.o 00:03:32.816 CC test/nvme/overhead/overhead.o 00:03:33.075 CC test/nvme/err_injection/err_injection.o 00:03:33.075 CC test/nvme/startup/startup.o 00:03:33.075 CXX test/cpp_headers/hexlify.o 00:03:33.075 CXX test/cpp_headers/histogram_data.o 00:03:33.075 CC test/nvme/reserve/reserve.o 00:03:33.075 LINK nvme_dp 00:03:33.075 LINK startup 00:03:33.075 CXX test/cpp_headers/idxd.o 00:03:33.075 LINK err_injection 00:03:33.333 CC test/nvme/simple_copy/simple_copy.o 00:03:33.333 LINK overhead 00:03:33.333 CC test/nvme/connect_stress/connect_stress.o 00:03:33.333 LINK reserve 00:03:33.333 CC test/nvme/boot_partition/boot_partition.o 00:03:33.333 CC test/nvme/compliance/nvme_compliance.o 00:03:33.333 CXX test/cpp_headers/idxd_spec.o 00:03:33.333 CXX test/cpp_headers/init.o 00:03:33.333 CC test/nvme/fused_ordering/fused_ordering.o 00:03:33.333 LINK connect_stress 00:03:33.592 LINK simple_copy 00:03:33.592 LINK boot_partition 00:03:33.592 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.592 CC test/nvme/fdp/fdp.o 00:03:33.592 CXX test/cpp_headers/ioat.o 00:03:33.592 LINK fused_ordering 00:03:33.592 LINK nvme_compliance 00:03:33.592 CC test/nvme/cuse/cuse.o 00:03:33.592 CXX test/cpp_headers/ioat_spec.o 00:03:33.592 CXX test/cpp_headers/iscsi_spec.o 00:03:33.592 LINK doorbell_aers 00:03:33.851 CXX test/cpp_headers/json.o 00:03:33.851 CXX test/cpp_headers/jsonrpc.o 00:03:33.851 CXX test/cpp_headers/keyring.o 00:03:33.851 CC test/bdev/bdevio/bdevio.o 00:03:33.851 CXX test/cpp_headers/keyring_module.o 00:03:33.851 CXX test/cpp_headers/likely.o 00:03:33.851 LINK fdp 00:03:33.851 CXX test/cpp_headers/log.o 00:03:33.851 CXX test/cpp_headers/lvol.o 00:03:34.109 CXX test/cpp_headers/md5.o 00:03:34.109 CXX test/cpp_headers/memory.o 00:03:34.109 CXX test/cpp_headers/mmio.o 00:03:34.109 CXX test/cpp_headers/nbd.o 00:03:34.109 CXX test/cpp_headers/net.o 00:03:34.109 CXX test/cpp_headers/notify.o 00:03:34.109 CXX test/cpp_headers/nvme.o 00:03:34.109 CXX test/cpp_headers/nvme_intel.o 00:03:34.109 CXX test/cpp_headers/nvme_ocssd.o 00:03:34.109 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:34.368 CXX test/cpp_headers/nvme_spec.o 00:03:34.368 CXX test/cpp_headers/nvme_zns.o 00:03:34.368 LINK bdevio 00:03:34.368 CXX test/cpp_headers/nvmf_cmd.o 00:03:34.368 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:34.368 CXX test/cpp_headers/nvmf.o 00:03:34.368 CXX test/cpp_headers/nvmf_spec.o 00:03:34.368 CXX test/cpp_headers/nvmf_transport.o 00:03:34.368 CXX test/cpp_headers/opal.o 00:03:34.368 CXX test/cpp_headers/opal_spec.o 00:03:34.368 CXX test/cpp_headers/pci_ids.o 00:03:34.368 CXX test/cpp_headers/pipe.o 00:03:34.368 CXX test/cpp_headers/queue.o 00:03:34.627 CXX 
test/cpp_headers/reduce.o 00:03:34.627 CXX test/cpp_headers/rpc.o 00:03:34.627 CXX test/cpp_headers/scheduler.o 00:03:34.627 CXX test/cpp_headers/scsi.o 00:03:34.627 CXX test/cpp_headers/scsi_spec.o 00:03:34.627 CXX test/cpp_headers/sock.o 00:03:34.627 CXX test/cpp_headers/stdinc.o 00:03:34.627 CXX test/cpp_headers/string.o 00:03:34.627 CXX test/cpp_headers/thread.o 00:03:34.627 CXX test/cpp_headers/trace.o 00:03:34.887 CXX test/cpp_headers/trace_parser.o 00:03:34.887 CXX test/cpp_headers/tree.o 00:03:34.887 CXX test/cpp_headers/ublk.o 00:03:34.887 CXX test/cpp_headers/util.o 00:03:34.887 CXX test/cpp_headers/uuid.o 00:03:34.887 CXX test/cpp_headers/version.o 00:03:34.887 CXX test/cpp_headers/vfio_user_pci.o 00:03:34.887 CXX test/cpp_headers/vfio_user_spec.o 00:03:34.887 CXX test/cpp_headers/vhost.o 00:03:34.887 CXX test/cpp_headers/vmd.o 00:03:34.887 CXX test/cpp_headers/xor.o 00:03:35.146 CXX test/cpp_headers/zipf.o 00:03:35.146 LINK cuse 00:03:38.432 LINK esnap 00:03:38.432 00:03:38.432 real 1m34.485s 00:03:38.432 user 8m23.589s 00:03:38.432 sys 1m45.707s 00:03:38.432 09:09:30 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:38.432 ************************************ 00:03:38.432 09:09:30 make -- common/autotest_common.sh@10 -- $ set +x 00:03:38.432 END TEST make 00:03:38.432 ************************************ 00:03:38.432 09:09:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:38.432 09:09:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:38.432 09:09:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:38.432 09:09:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.432 09:09:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:38.432 09:09:30 -- pm/common@44 -- $ pid=5402 00:03:38.432 09:09:30 -- pm/common@50 -- $ kill -TERM 5402 00:03:38.432 09:09:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.432 09:09:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:38.432 09:09:30 -- pm/common@44 -- $ pid=5403 00:03:38.432 09:09:30 -- pm/common@50 -- $ kill -TERM 5403 00:03:38.691 09:09:30 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:38.691 09:09:30 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:38.691 09:09:30 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:38.691 09:09:30 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:38.691 09:09:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:38.691 09:09:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:38.691 09:09:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:38.691 09:09:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:38.691 09:09:30 -- scripts/common.sh@336 -- # read -ra ver1 00:03:38.691 09:09:30 -- scripts/common.sh@337 -- # IFS=.-: 00:03:38.691 09:09:30 -- scripts/common.sh@337 -- # read -ra ver2 00:03:38.691 09:09:30 -- scripts/common.sh@338 -- # local 'op=<' 00:03:38.691 09:09:30 -- scripts/common.sh@340 -- # ver1_l=2 00:03:38.691 09:09:30 -- scripts/common.sh@341 -- # ver2_l=1 00:03:38.692 09:09:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:38.692 09:09:30 -- scripts/common.sh@344 -- # case "$op" in 00:03:38.692 09:09:30 -- scripts/common.sh@345 -- # : 1 00:03:38.692 09:09:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:38.692 09:09:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:38.692 09:09:30 -- scripts/common.sh@365 -- # decimal 1 00:03:38.692 09:09:30 -- scripts/common.sh@353 -- # local d=1 00:03:38.692 09:09:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:38.692 09:09:30 -- scripts/common.sh@355 -- # echo 1 00:03:38.692 09:09:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:38.692 09:09:30 -- scripts/common.sh@366 -- # decimal 2 00:03:38.692 09:09:30 -- scripts/common.sh@353 -- # local d=2 00:03:38.692 09:09:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:38.692 09:09:30 -- scripts/common.sh@355 -- # echo 2 00:03:38.692 09:09:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:38.692 09:09:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:38.692 09:09:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:38.692 09:09:30 -- scripts/common.sh@368 -- # return 0 00:03:38.692 09:09:30 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:38.692 09:09:30 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:38.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.692 --rc genhtml_branch_coverage=1 00:03:38.692 --rc genhtml_function_coverage=1 00:03:38.692 --rc genhtml_legend=1 00:03:38.692 --rc geninfo_all_blocks=1 00:03:38.692 --rc geninfo_unexecuted_blocks=1 00:03:38.692 00:03:38.692 ' 00:03:38.692 09:09:30 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:38.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.692 --rc genhtml_branch_coverage=1 00:03:38.692 --rc genhtml_function_coverage=1 00:03:38.692 --rc genhtml_legend=1 00:03:38.692 --rc geninfo_all_blocks=1 00:03:38.692 --rc geninfo_unexecuted_blocks=1 00:03:38.692 00:03:38.692 ' 00:03:38.692 09:09:30 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:38.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.692 --rc genhtml_branch_coverage=1 00:03:38.692 --rc genhtml_function_coverage=1 00:03:38.692 --rc genhtml_legend=1 00:03:38.692 --rc geninfo_all_blocks=1 00:03:38.692 --rc geninfo_unexecuted_blocks=1 00:03:38.692 00:03:38.692 ' 00:03:38.692 09:09:30 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:38.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.692 --rc genhtml_branch_coverage=1 00:03:38.692 --rc genhtml_function_coverage=1 00:03:38.692 --rc genhtml_legend=1 00:03:38.692 --rc geninfo_all_blocks=1 00:03:38.692 --rc geninfo_unexecuted_blocks=1 00:03:38.692 00:03:38.692 ' 00:03:38.692 09:09:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:38.692 09:09:30 -- nvmf/common.sh@7 -- # uname -s 00:03:38.692 09:09:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:38.692 09:09:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:38.692 09:09:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:38.692 09:09:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:38.692 09:09:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:38.692 09:09:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:38.692 09:09:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:38.692 09:09:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:38.692 09:09:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:38.692 09:09:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:38.692 09:09:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:03:38.692 
09:09:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:03:38.692 09:09:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:38.692 09:09:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:38.692 09:09:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:38.692 09:09:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:38.692 09:09:30 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:38.692 09:09:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:38.692 09:09:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:38.692 09:09:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:38.692 09:09:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:38.692 09:09:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.692 09:09:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.692 09:09:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.692 09:09:30 -- paths/export.sh@5 -- # export PATH 00:03:38.692 09:09:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.692 09:09:30 -- nvmf/common.sh@51 -- # : 0 00:03:38.692 09:09:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:38.692 09:09:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:38.692 09:09:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:38.692 09:09:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:38.692 09:09:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:38.692 09:09:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:38.692 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:38.692 09:09:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:38.692 09:09:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:38.692 09:09:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:38.692 09:09:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:38.692 09:09:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:38.692 09:09:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:38.692 09:09:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:38.692 09:09:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.692 09:09:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:38.692 09:09:30 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.692 09:09:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:38.951 09:09:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:38.951 09:09:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:38.951 09:09:30 -- spdk/autotest.sh@48 -- # udevadm_pid=54567 00:03:38.951 09:09:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:38.951 09:09:30 -- pm/common@17 -- # local monitor 00:03:38.951 09:09:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.951 09:09:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:38.951 09:09:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.951 09:09:30 -- pm/common@25 -- # sleep 1 00:03:38.951 09:09:30 -- pm/common@21 -- # date +%s 00:03:38.951 09:09:30 -- pm/common@21 -- # date +%s 00:03:38.951 09:09:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728378570 00:03:38.951 09:09:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728378570 00:03:38.951 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728378570_collect-cpu-load.pm.log 00:03:38.951 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728378570_collect-vmstat.pm.log 00:03:39.886 09:09:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:39.886 09:09:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:39.886 09:09:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:39.886 09:09:31 -- common/autotest_common.sh@10 -- # set +x 00:03:39.886 09:09:31 -- spdk/autotest.sh@59 -- # create_test_list 00:03:39.886 09:09:31 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:39.886 09:09:31 -- common/autotest_common.sh@10 -- # set +x 00:03:39.886 09:09:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:39.886 09:09:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:39.886 09:09:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:39.886 09:09:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:39.886 09:09:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:39.886 09:09:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:39.886 09:09:31 -- common/autotest_common.sh@1455 -- # uname 00:03:39.886 09:09:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:39.886 09:09:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:39.886 09:09:31 -- common/autotest_common.sh@1475 -- # uname 00:03:39.886 09:09:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:39.886 09:09:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:39.886 09:09:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:39.886 lcov: LCOV version 1.15 00:03:39.886 09:09:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:58.016 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:58.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:16.123 09:10:04 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:16.123 09:10:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.123 09:10:04 -- common/autotest_common.sh@10 -- # set +x 00:04:16.123 09:10:04 -- spdk/autotest.sh@78 -- # rm -f 00:04:16.123 09:10:04 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.123 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:16.123 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:16.124 09:10:05 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:16.124 09:10:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:16.124 09:10:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:16.124 09:10:05 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:16.124 09:10:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:16.124 09:10:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:16.124 09:10:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:16.124 09:10:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.124 09:10:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:16.124 09:10:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:16.124 09:10:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:16.124 09:10:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:16.124 09:10:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.124 09:10:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:16.124 09:10:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:16.124 09:10:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:16.124 09:10:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:16.124 09:10:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:16.124 09:10:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:16.124 09:10:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:16.124 09:10:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:16.124 09:10:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:16.124 09:10:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:16.124 09:10:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:16.124 09:10:05 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:16.124 09:10:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.124 09:10:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.124 09:10:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:16.124 09:10:05 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:16.124 09:10:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.124 No valid GPT data, bailing 
00:04:16.124 09:10:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.124 09:10:05 -- scripts/common.sh@394 -- # pt= 00:04:16.124 09:10:05 -- scripts/common.sh@395 -- # return 1 00:04:16.124 09:10:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.124 1+0 records in 00:04:16.124 1+0 records out 00:04:16.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059137 s, 177 MB/s 00:04:16.124 09:10:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.124 09:10:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.124 09:10:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:16.124 09:10:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:16.124 09:10:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:16.124 No valid GPT data, bailing 00:04:16.124 09:10:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:16.124 09:10:05 -- scripts/common.sh@394 -- # pt= 00:04:16.124 09:10:05 -- scripts/common.sh@395 -- # return 1 00:04:16.124 09:10:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:16.124 1+0 records in 00:04:16.124 1+0 records out 00:04:16.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00523188 s, 200 MB/s 00:04:16.124 09:10:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.124 09:10:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.124 09:10:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:16.124 09:10:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:16.124 09:10:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:16.124 No valid GPT data, bailing 00:04:16.124 09:10:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:16.124 09:10:05 -- scripts/common.sh@394 -- # pt= 00:04:16.124 09:10:05 -- scripts/common.sh@395 -- # return 1 00:04:16.124 09:10:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:16.124 1+0 records in 00:04:16.124 1+0 records out 00:04:16.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458761 s, 229 MB/s 00:04:16.124 09:10:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.124 09:10:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.124 09:10:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:16.124 09:10:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:16.124 09:10:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:16.124 No valid GPT data, bailing 00:04:16.124 09:10:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:16.124 09:10:05 -- scripts/common.sh@394 -- # pt= 00:04:16.124 09:10:05 -- scripts/common.sh@395 -- # return 1 00:04:16.124 09:10:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:16.124 1+0 records in 00:04:16.124 1+0 records out 00:04:16.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406337 s, 258 MB/s 00:04:16.124 09:10:05 -- spdk/autotest.sh@105 -- # sync 00:04:16.124 09:10:05 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.124 09:10:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.124 09:10:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:16.124 09:10:07 -- spdk/autotest.sh@111 -- # uname -s 00:04:16.124 09:10:07 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:04:16.124 09:10:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:16.124 09:10:07 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:17.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.089 Hugepages 00:04:17.089 node hugesize free / total 00:04:17.089 node0 1048576kB 0 / 0 00:04:17.089 node0 2048kB 0 / 0 00:04:17.089 00:04:17.089 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:17.089 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:17.089 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:17.089 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:17.089 09:10:08 -- spdk/autotest.sh@117 -- # uname -s 00:04:17.089 09:10:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:17.089 09:10:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:17.089 09:10:08 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.916 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.917 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.917 09:10:09 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:19.295 09:10:10 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:19.295 09:10:10 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:19.295 09:10:10 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:19.295 09:10:10 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:19.295 09:10:10 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:19.295 09:10:10 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:19.295 09:10:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.295 09:10:10 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:19.295 09:10:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:19.295 09:10:10 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:19.295 09:10:10 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:19.295 09:10:10 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.554 Waiting for block devices as requested 00:04:19.554 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:19.554 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:19.554 09:10:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:19.554 09:10:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:19.554 09:10:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:19.554 09:10:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:19.554 09:10:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:19.554 09:10:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:19.554 09:10:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:19.554 09:10:11 -- common/autotest_common.sh@1490 -- # printf 
'%s\n' nvme1 00:04:19.554 09:10:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:19.554 09:10:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:19.554 09:10:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:19.554 09:10:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:19.554 09:10:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:19.554 09:10:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:19.554 09:10:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:19.554 09:10:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:19.813 09:10:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:19.813 09:10:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:19.813 09:10:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:19.813 09:10:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:19.813 09:10:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:19.813 09:10:11 -- common/autotest_common.sh@1541 -- # continue 00:04:19.813 09:10:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:19.813 09:10:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:19.813 09:10:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:19.813 09:10:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:19.813 09:10:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:19.813 09:10:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:19.813 09:10:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:19.813 09:10:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:19.813 09:10:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:19.813 09:10:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:19.813 09:10:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:19.813 09:10:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:19.813 09:10:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:19.813 09:10:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:19.813 09:10:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:19.813 09:10:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:19.813 09:10:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:19.813 09:10:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:19.813 09:10:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:19.813 09:10:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:19.813 09:10:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:19.813 09:10:11 -- common/autotest_common.sh@1541 -- # continue 00:04:19.813 09:10:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:19.813 09:10:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:19.813 09:10:11 -- common/autotest_common.sh@10 -- # set +x 00:04:19.813 09:10:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:19.813 09:10:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.813 09:10:11 -- common/autotest_common.sh@10 -- # set +x 00:04:19.813 09:10:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:20.380 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.638 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.638 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.638 09:10:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:20.638 09:10:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:20.638 09:10:12 -- common/autotest_common.sh@10 -- # set +x 00:04:20.638 09:10:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:20.638 09:10:12 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:20.638 09:10:12 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:20.638 09:10:12 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:20.638 09:10:12 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:20.638 09:10:12 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:20.638 09:10:12 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:20.638 09:10:12 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:20.638 09:10:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:20.638 09:10:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:20.638 09:10:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.638 09:10:12 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:20.638 09:10:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:20.638 09:10:12 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:20.638 09:10:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:20.638 09:10:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:20.638 09:10:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:20.638 09:10:12 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:20.638 09:10:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:20.638 09:10:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:20.638 09:10:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:20.638 09:10:12 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:20.638 09:10:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:20.638 09:10:12 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:20.638 09:10:12 -- common/autotest_common.sh@1570 -- # return 0 00:04:20.638 09:10:12 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:20.638 09:10:12 -- common/autotest_common.sh@1578 -- # return 0 00:04:20.638 09:10:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:20.638 09:10:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:20.638 09:10:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.638 09:10:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.638 09:10:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:20.638 09:10:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.638 09:10:12 -- common/autotest_common.sh@10 -- # set +x 00:04:20.638 09:10:12 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:20.639 09:10:12 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:20.639 09:10:12 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:20.639 09:10:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:20.639 09:10:12 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.639 09:10:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.639 09:10:12 -- common/autotest_common.sh@10 -- # set +x 00:04:20.639 ************************************ 00:04:20.639 START TEST env 00:04:20.639 ************************************ 00:04:20.639 09:10:12 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:20.898 * Looking for test storage... 00:04:20.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:20.898 09:10:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.898 09:10:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.898 09:10:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.898 09:10:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.898 09:10:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.898 09:10:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.898 09:10:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.898 09:10:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.898 09:10:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.898 09:10:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.898 09:10:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.898 09:10:12 env -- scripts/common.sh@344 -- # case "$op" in 00:04:20.898 09:10:12 env -- scripts/common.sh@345 -- # : 1 00:04:20.898 09:10:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.898 09:10:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.898 09:10:12 env -- scripts/common.sh@365 -- # decimal 1 00:04:20.898 09:10:12 env -- scripts/common.sh@353 -- # local d=1 00:04:20.898 09:10:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.898 09:10:12 env -- scripts/common.sh@355 -- # echo 1 00:04:20.898 09:10:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.898 09:10:12 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.898 09:10:12 env -- scripts/common.sh@353 -- # local d=2 00:04:20.898 09:10:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.898 09:10:12 env -- scripts/common.sh@355 -- # echo 2 00:04:20.898 09:10:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.898 09:10:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.898 09:10:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.898 09:10:12 env -- scripts/common.sh@368 -- # return 0 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:20.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.898 --rc genhtml_branch_coverage=1 00:04:20.898 --rc genhtml_function_coverage=1 00:04:20.898 --rc genhtml_legend=1 00:04:20.898 --rc geninfo_all_blocks=1 00:04:20.898 --rc geninfo_unexecuted_blocks=1 00:04:20.898 00:04:20.898 ' 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:20.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.898 --rc genhtml_branch_coverage=1 00:04:20.898 --rc genhtml_function_coverage=1 00:04:20.898 --rc genhtml_legend=1 00:04:20.898 --rc geninfo_all_blocks=1 00:04:20.898 --rc geninfo_unexecuted_blocks=1 00:04:20.898 00:04:20.898 ' 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:20.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.898 --rc genhtml_branch_coverage=1 00:04:20.898 --rc genhtml_function_coverage=1 00:04:20.898 --rc genhtml_legend=1 00:04:20.898 --rc geninfo_all_blocks=1 00:04:20.898 --rc geninfo_unexecuted_blocks=1 00:04:20.898 00:04:20.898 ' 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:20.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.898 --rc genhtml_branch_coverage=1 00:04:20.898 --rc genhtml_function_coverage=1 00:04:20.898 --rc genhtml_legend=1 00:04:20.898 --rc geninfo_all_blocks=1 00:04:20.898 --rc geninfo_unexecuted_blocks=1 00:04:20.898 00:04:20.898 ' 00:04:20.898 09:10:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.898 09:10:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.898 09:10:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.898 ************************************ 00:04:20.898 START TEST env_memory 00:04:20.898 ************************************ 00:04:20.898 09:10:12 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.898 00:04:20.898 00:04:20.898 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.898 http://cunit.sourceforge.net/ 00:04:20.898 00:04:20.898 00:04:20.898 Suite: memory 00:04:20.898 Test: alloc and free memory map ...[2024-10-08 09:10:12.562572] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.898 passed 00:04:21.158 Test: mem map translation ...[2024-10-08 09:10:12.594315] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:21.158 [2024-10-08 09:10:12.594365] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:21.158 [2024-10-08 09:10:12.594440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:21.158 [2024-10-08 09:10:12.594464] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:21.158 passed 00:04:21.158 Test: mem map registration ...[2024-10-08 09:10:12.658244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:21.158 [2024-10-08 09:10:12.658290] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:21.158 passed 00:04:21.158 Test: mem map adjacent registrations ...passed 00:04:21.158 00:04:21.158 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.158 suites 1 1 n/a 0 0 00:04:21.158 tests 4 4 4 0 0 00:04:21.158 asserts 152 152 152 0 n/a 00:04:21.158 00:04:21.158 Elapsed time = 0.214 seconds 00:04:21.158 ************************************ 00:04:21.158 END TEST env_memory 00:04:21.158 ************************************ 00:04:21.158 00:04:21.158 real 0m0.227s 00:04:21.158 user 0m0.213s 00:04:21.158 sys 0m0.011s 00:04:21.158 09:10:12 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.158 09:10:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:21.158 09:10:12 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:21.158 09:10:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.158 09:10:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.158 09:10:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.158 ************************************ 00:04:21.158 START TEST env_vtophys 00:04:21.158 ************************************ 00:04:21.158 09:10:12 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:21.158 EAL: lib.eal log level changed from notice to debug 00:04:21.158 EAL: Detected lcore 0 as core 0 on socket 0 00:04:21.158 EAL: Detected lcore 1 as core 0 on socket 0 00:04:21.158 EAL: Detected lcore 2 as core 0 on socket 0 00:04:21.158 EAL: Detected lcore 3 as core 0 on socket 0 00:04:21.158 EAL: Detected lcore 4 as core 0 on socket 0 00:04:21.158 EAL: Detected lcore 5 as core 0 on socket 0 00:04:21.158 EAL: Detected lcore 6 as core 0 on socket 0 00:04:21.158 EAL: Detected lcore 7 as core 0 on socket 0 00:04:21.158 EAL: Detected lcore 8 as core 0 on socket 0 00:04:21.158 EAL: Detected lcore 9 as core 0 on socket 0 00:04:21.158 EAL: Maximum logical cores by configuration: 128 00:04:21.158 EAL: Detected CPU lcores: 10 00:04:21.158 EAL: Detected NUMA nodes: 1 00:04:21.158 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:21.158 EAL: Detected shared linkage of DPDK 00:04:21.158 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:21.158 EAL: Selected IOVA mode 'PA' 00:04:21.158 EAL: Probing VFIO support... 00:04:21.158 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:21.158 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:21.158 EAL: Ask a virtual area of 0x2e000 bytes 00:04:21.158 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:21.158 EAL: Setting up physically contiguous memory... 00:04:21.158 EAL: Setting maximum number of open files to 524288 00:04:21.158 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:21.158 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:21.158 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.158 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:21.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.158 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.158 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:21.158 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:21.158 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.158 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:21.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.158 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.158 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:21.158 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:21.158 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.158 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:21.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.158 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.158 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:21.158 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:21.158 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.158 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:21.158 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.158 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.158 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:21.158 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:21.158 EAL: Hugepages will be freed exactly as allocated. 00:04:21.158 EAL: No shared files mode enabled, IPC is disabled 00:04:21.158 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: TSC frequency is ~2200000 KHz 00:04:21.418 EAL: Main lcore 0 is ready (tid=7f3517afda00;cpuset=[0]) 00:04:21.418 EAL: Trying to obtain current memory policy. 00:04:21.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.418 EAL: Restoring previous memory policy: 0 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was expanded by 2MB 00:04:21.418 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:21.418 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:21.418 EAL: Mem event callback 'spdk:(nil)' registered 00:04:21.418 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:21.418 00:04:21.418 00:04:21.418 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.418 http://cunit.sourceforge.net/ 00:04:21.418 00:04:21.418 00:04:21.418 Suite: components_suite 00:04:21.418 Test: vtophys_malloc_test ...passed 00:04:21.418 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:21.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.418 EAL: Restoring previous memory policy: 4 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was expanded by 4MB 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was shrunk by 4MB 00:04:21.418 EAL: Trying to obtain current memory policy. 00:04:21.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.418 EAL: Restoring previous memory policy: 4 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was expanded by 6MB 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was shrunk by 6MB 00:04:21.418 EAL: Trying to obtain current memory policy. 00:04:21.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.418 EAL: Restoring previous memory policy: 4 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was expanded by 10MB 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was shrunk by 10MB 00:04:21.418 EAL: Trying to obtain current memory policy. 00:04:21.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.418 EAL: Restoring previous memory policy: 4 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was expanded by 18MB 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was shrunk by 18MB 00:04:21.418 EAL: Trying to obtain current memory policy. 00:04:21.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.418 EAL: Restoring previous memory policy: 4 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was expanded by 34MB 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was shrunk by 34MB 00:04:21.418 EAL: Trying to obtain current memory policy. 
00:04:21.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.418 EAL: Restoring previous memory policy: 4 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was expanded by 66MB 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was shrunk by 66MB 00:04:21.418 EAL: Trying to obtain current memory policy. 00:04:21.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.418 EAL: Restoring previous memory policy: 4 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was expanded by 130MB 00:04:21.418 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.418 EAL: request: mp_malloc_sync 00:04:21.418 EAL: No shared files mode enabled, IPC is disabled 00:04:21.418 EAL: Heap on socket 0 was shrunk by 130MB 00:04:21.418 EAL: Trying to obtain current memory policy. 00:04:21.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.678 EAL: Restoring previous memory policy: 4 00:04:21.678 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.678 EAL: request: mp_malloc_sync 00:04:21.678 EAL: No shared files mode enabled, IPC is disabled 00:04:21.678 EAL: Heap on socket 0 was expanded by 258MB 00:04:21.678 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.678 EAL: request: mp_malloc_sync 00:04:21.678 EAL: No shared files mode enabled, IPC is disabled 00:04:21.678 EAL: Heap on socket 0 was shrunk by 258MB 00:04:21.678 EAL: Trying to obtain current memory policy. 00:04:21.678 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.938 EAL: Restoring previous memory policy: 4 00:04:21.938 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.938 EAL: request: mp_malloc_sync 00:04:21.938 EAL: No shared files mode enabled, IPC is disabled 00:04:21.938 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.938 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.938 EAL: request: mp_malloc_sync 00:04:21.938 EAL: No shared files mode enabled, IPC is disabled 00:04:21.938 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.938 EAL: Trying to obtain current memory policy. 
00:04:21.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.218 EAL: Restoring previous memory policy: 4 00:04:22.218 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.218 EAL: request: mp_malloc_sync 00:04:22.218 EAL: No shared files mode enabled, IPC is disabled 00:04:22.218 EAL: Heap on socket 0 was expanded by 1026MB 00:04:22.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.745 passed 00:04:22.745 00:04:22.745 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.745 suites 1 1 n/a 0 0 00:04:22.745 tests 2 2 2 0 0 00:04:22.745 asserts 5666 5666 5666 0 n/a 00:04:22.745 00:04:22.745 Elapsed time = 1.262 seconds 00:04:22.745 EAL: request: mp_malloc_sync 00:04:22.745 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:22.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.745 EAL: request: mp_malloc_sync 00:04:22.745 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 EAL: Heap on socket 0 was shrunk by 2MB 00:04:22.745 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 EAL: No shared files mode enabled, IPC is disabled 00:04:22.745 00:04:22.745 real 0m1.465s 00:04:22.745 user 0m0.800s 00:04:22.745 sys 0m0.528s 00:04:22.745 09:10:14 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.745 09:10:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:22.745 ************************************ 00:04:22.745 END TEST env_vtophys 00:04:22.745 ************************************ 00:04:22.745 09:10:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:22.745 09:10:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.745 09:10:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.745 09:10:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.745 ************************************ 00:04:22.745 START TEST env_pci 00:04:22.745 ************************************ 00:04:22.745 09:10:14 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:22.745 00:04:22.745 00:04:22.745 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.745 http://cunit.sourceforge.net/ 00:04:22.745 00:04:22.745 00:04:22.745 Suite: pci 00:04:22.745 Test: pci_hook ...[2024-10-08 09:10:14.331350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56812 has claimed it 00:04:22.745 passed 00:04:22.745 00:04:22.745 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.745 suites 1 1 n/a 0 0 00:04:22.745 EAL: Cannot find device (10000:00:01.0) 00:04:22.745 EAL: Failed to attach device on primary process 00:04:22.745 tests 1 1 1 0 0 00:04:22.745 asserts 25 25 25 0 n/a 00:04:22.745 00:04:22.745 Elapsed time = 0.002 seconds 00:04:22.745 00:04:22.745 real 0m0.020s 00:04:22.745 user 0m0.008s 00:04:22.745 sys 0m0.012s 00:04:22.745 09:10:14 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.745 09:10:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:22.745 ************************************ 00:04:22.745 END TEST env_pci 00:04:22.745 ************************************ 00:04:22.745 09:10:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:22.745 09:10:14 env -- env/env.sh@15 -- # uname 00:04:22.745 09:10:14 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:22.746 09:10:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:22.746 09:10:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.746 09:10:14 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:22.746 09:10:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.746 09:10:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.746 ************************************ 00:04:22.746 START TEST env_dpdk_post_init 00:04:22.746 ************************************ 00:04:22.746 09:10:14 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.005 EAL: Detected CPU lcores: 10 00:04:23.005 EAL: Detected NUMA nodes: 1 00:04:23.005 EAL: Detected shared linkage of DPDK 00:04:23.005 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.005 EAL: Selected IOVA mode 'PA' 00:04:23.005 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.005 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:23.005 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:23.005 Starting DPDK initialization... 00:04:23.005 Starting SPDK post initialization... 00:04:23.005 SPDK NVMe probe 00:04:23.005 Attaching to 0000:00:10.0 00:04:23.005 Attaching to 0000:00:11.0 00:04:23.005 Attached to 0000:00:10.0 00:04:23.005 Attached to 0000:00:11.0 00:04:23.005 Cleaning up... 00:04:23.005 00:04:23.005 real 0m0.177s 00:04:23.005 user 0m0.045s 00:04:23.005 sys 0m0.032s 00:04:23.005 09:10:14 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.005 09:10:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.005 ************************************ 00:04:23.005 END TEST env_dpdk_post_init 00:04:23.005 ************************************ 00:04:23.005 09:10:14 env -- env/env.sh@26 -- # uname 00:04:23.005 09:10:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:23.005 09:10:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:23.005 09:10:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.005 09:10:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.005 09:10:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.005 ************************************ 00:04:23.005 START TEST env_mem_callbacks 00:04:23.005 ************************************ 00:04:23.005 09:10:14 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:23.005 EAL: Detected CPU lcores: 10 00:04:23.005 EAL: Detected NUMA nodes: 1 00:04:23.005 EAL: Detected shared linkage of DPDK 00:04:23.005 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.005 EAL: Selected IOVA mode 'PA' 00:04:23.264 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.264 00:04:23.264 00:04:23.264 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.264 http://cunit.sourceforge.net/ 00:04:23.264 00:04:23.264 00:04:23.264 Suite: memory 00:04:23.264 Test: test ... 
00:04:23.264 register 0x200000200000 2097152 00:04:23.264 malloc 3145728 00:04:23.264 register 0x200000400000 4194304 00:04:23.264 buf 0x200000500000 len 3145728 PASSED 00:04:23.264 malloc 64 00:04:23.264 buf 0x2000004fff40 len 64 PASSED 00:04:23.264 malloc 4194304 00:04:23.264 register 0x200000800000 6291456 00:04:23.264 buf 0x200000a00000 len 4194304 PASSED 00:04:23.264 free 0x200000500000 3145728 00:04:23.264 free 0x2000004fff40 64 00:04:23.264 unregister 0x200000400000 4194304 PASSED 00:04:23.264 free 0x200000a00000 4194304 00:04:23.264 unregister 0x200000800000 6291456 PASSED 00:04:23.264 malloc 8388608 00:04:23.264 register 0x200000400000 10485760 00:04:23.264 buf 0x200000600000 len 8388608 PASSED 00:04:23.264 free 0x200000600000 8388608 00:04:23.264 unregister 0x200000400000 10485760 PASSED 00:04:23.264 passed 00:04:23.264 00:04:23.264 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.264 suites 1 1 n/a 0 0 00:04:23.264 tests 1 1 1 0 0 00:04:23.264 asserts 15 15 15 0 n/a 00:04:23.264 00:04:23.264 Elapsed time = 0.009 seconds 00:04:23.264 00:04:23.264 real 0m0.145s 00:04:23.264 user 0m0.022s 00:04:23.264 sys 0m0.022s 00:04:23.264 09:10:14 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.264 09:10:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:23.264 ************************************ 00:04:23.264 END TEST env_mem_callbacks 00:04:23.264 ************************************ 00:04:23.264 00:04:23.264 real 0m2.512s 00:04:23.264 user 0m1.295s 00:04:23.264 sys 0m0.862s 00:04:23.264 09:10:14 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.264 ************************************ 00:04:23.264 END TEST env 00:04:23.264 ************************************ 00:04:23.264 09:10:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.264 09:10:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:23.264 09:10:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.264 09:10:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.264 09:10:14 -- common/autotest_common.sh@10 -- # set +x 00:04:23.264 ************************************ 00:04:23.264 START TEST rpc 00:04:23.264 ************************************ 00:04:23.264 09:10:14 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:23.264 * Looking for test storage... 
00:04:23.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.523 09:10:14 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:23.523 09:10:14 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:23.523 09:10:14 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:23.523 09:10:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.523 09:10:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.523 09:10:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.523 09:10:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.523 09:10:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.523 09:10:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.523 09:10:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.523 09:10:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.523 09:10:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.523 09:10:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.523 09:10:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.523 09:10:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.523 09:10:15 rpc -- scripts/common.sh@345 -- # : 1 00:04:23.523 09:10:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.523 09:10:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.523 09:10:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.523 09:10:15 rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.523 09:10:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.523 09:10:15 rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.523 09:10:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.523 09:10:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.523 09:10:15 rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.523 09:10:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.523 09:10:15 rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.523 09:10:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.523 09:10:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.523 09:10:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.523 09:10:15 rpc -- scripts/common.sh@368 -- # return 0 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:23.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.523 --rc genhtml_branch_coverage=1 00:04:23.523 --rc genhtml_function_coverage=1 00:04:23.523 --rc genhtml_legend=1 00:04:23.523 --rc geninfo_all_blocks=1 00:04:23.523 --rc geninfo_unexecuted_blocks=1 00:04:23.523 00:04:23.523 ' 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:23.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.523 --rc genhtml_branch_coverage=1 00:04:23.523 --rc genhtml_function_coverage=1 00:04:23.523 --rc genhtml_legend=1 00:04:23.523 --rc geninfo_all_blocks=1 00:04:23.523 --rc geninfo_unexecuted_blocks=1 00:04:23.523 00:04:23.523 ' 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:23.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.523 --rc genhtml_branch_coverage=1 00:04:23.523 --rc genhtml_function_coverage=1 00:04:23.523 --rc 
genhtml_legend=1 00:04:23.523 --rc geninfo_all_blocks=1 00:04:23.523 --rc geninfo_unexecuted_blocks=1 00:04:23.523 00:04:23.523 ' 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:23.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.523 --rc genhtml_branch_coverage=1 00:04:23.523 --rc genhtml_function_coverage=1 00:04:23.523 --rc genhtml_legend=1 00:04:23.523 --rc geninfo_all_blocks=1 00:04:23.523 --rc geninfo_unexecuted_blocks=1 00:04:23.523 00:04:23.523 ' 00:04:23.523 09:10:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56935 00:04:23.523 09:10:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.523 09:10:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56935 00:04:23.523 09:10:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@831 -- # '[' -z 56935 ']' 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.523 09:10:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.523 [2024-10-08 09:10:15.130126] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:23.523 [2024-10-08 09:10:15.130248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56935 ] 00:04:23.783 [2024-10-08 09:10:15.271327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.783 [2024-10-08 09:10:15.401649] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:23.783 [2024-10-08 09:10:15.401723] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56935' to capture a snapshot of events at runtime. 00:04:23.783 [2024-10-08 09:10:15.401753] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:23.783 [2024-10-08 09:10:15.401763] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:23.783 [2024-10-08 09:10:15.401771] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56935 for offline analysis/debug. 
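As an aside, the notices above spell out how the tracepoints enabled by '-e bdev' can be inspected while this target is up; a minimal sketch, assuming spdk_trace was built alongside spdk_tgt and that pid 56935 is still the running target, would be:

  # capture a snapshot of the enabled tracepoint group from the live target, as the notice suggests
  spdk_trace -s spdk_tgt -p 56935
  # or keep the shared-memory trace file named in the notice for offline analysis after the target exits
  cp /dev/shm/spdk_tgt_trace.pid56935 .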
00:04:23.783 [2024-10-08 09:10:15.402359] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.042 [2024-10-08 09:10:15.476265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:24.609 09:10:16 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.609 09:10:16 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:24.609 09:10:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.609 09:10:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.609 09:10:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:24.609 09:10:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:24.609 09:10:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.609 09:10:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.609 09:10:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.609 ************************************ 00:04:24.609 START TEST rpc_integrity 00:04:24.609 ************************************ 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:24.609 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.609 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.609 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.609 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.609 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.609 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:24.609 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.609 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.609 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.609 { 00:04:24.609 "name": "Malloc0", 00:04:24.609 "aliases": [ 00:04:24.609 "bef334c9-3383-49bf-9c57-301b02ce0b33" 00:04:24.609 ], 00:04:24.609 "product_name": "Malloc disk", 00:04:24.609 "block_size": 512, 00:04:24.609 "num_blocks": 16384, 00:04:24.609 "uuid": "bef334c9-3383-49bf-9c57-301b02ce0b33", 00:04:24.609 "assigned_rate_limits": { 00:04:24.609 "rw_ios_per_sec": 0, 00:04:24.609 "rw_mbytes_per_sec": 0, 00:04:24.609 "r_mbytes_per_sec": 0, 00:04:24.609 "w_mbytes_per_sec": 0 00:04:24.609 }, 00:04:24.609 "claimed": false, 00:04:24.609 "zoned": false, 00:04:24.609 
"supported_io_types": { 00:04:24.609 "read": true, 00:04:24.609 "write": true, 00:04:24.609 "unmap": true, 00:04:24.609 "flush": true, 00:04:24.609 "reset": true, 00:04:24.609 "nvme_admin": false, 00:04:24.609 "nvme_io": false, 00:04:24.609 "nvme_io_md": false, 00:04:24.609 "write_zeroes": true, 00:04:24.609 "zcopy": true, 00:04:24.609 "get_zone_info": false, 00:04:24.609 "zone_management": false, 00:04:24.609 "zone_append": false, 00:04:24.609 "compare": false, 00:04:24.609 "compare_and_write": false, 00:04:24.609 "abort": true, 00:04:24.609 "seek_hole": false, 00:04:24.609 "seek_data": false, 00:04:24.609 "copy": true, 00:04:24.609 "nvme_iov_md": false 00:04:24.609 }, 00:04:24.609 "memory_domains": [ 00:04:24.609 { 00:04:24.609 "dma_device_id": "system", 00:04:24.609 "dma_device_type": 1 00:04:24.609 }, 00:04:24.609 { 00:04:24.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.609 "dma_device_type": 2 00:04:24.609 } 00:04:24.609 ], 00:04:24.609 "driver_specific": {} 00:04:24.609 } 00:04:24.609 ]' 00:04:24.609 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.868 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.868 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:24.868 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.868 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.868 [2024-10-08 09:10:16.345529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:24.868 [2024-10-08 09:10:16.345579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.868 [2024-10-08 09:10:16.345617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11ab120 00:04:24.868 [2024-10-08 09:10:16.345634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.868 [2024-10-08 09:10:16.347140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.868 [2024-10-08 09:10:16.347220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.868 Passthru0 00:04:24.868 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.868 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.868 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.868 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.868 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.868 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.868 { 00:04:24.868 "name": "Malloc0", 00:04:24.868 "aliases": [ 00:04:24.868 "bef334c9-3383-49bf-9c57-301b02ce0b33" 00:04:24.868 ], 00:04:24.868 "product_name": "Malloc disk", 00:04:24.868 "block_size": 512, 00:04:24.868 "num_blocks": 16384, 00:04:24.868 "uuid": "bef334c9-3383-49bf-9c57-301b02ce0b33", 00:04:24.868 "assigned_rate_limits": { 00:04:24.868 "rw_ios_per_sec": 0, 00:04:24.869 "rw_mbytes_per_sec": 0, 00:04:24.869 "r_mbytes_per_sec": 0, 00:04:24.869 "w_mbytes_per_sec": 0 00:04:24.869 }, 00:04:24.869 "claimed": true, 00:04:24.869 "claim_type": "exclusive_write", 00:04:24.869 "zoned": false, 00:04:24.869 "supported_io_types": { 00:04:24.869 "read": true, 00:04:24.869 "write": true, 00:04:24.869 "unmap": true, 00:04:24.869 "flush": true, 00:04:24.869 "reset": true, 00:04:24.869 "nvme_admin": false, 
00:04:24.869 "nvme_io": false, 00:04:24.869 "nvme_io_md": false, 00:04:24.869 "write_zeroes": true, 00:04:24.869 "zcopy": true, 00:04:24.869 "get_zone_info": false, 00:04:24.869 "zone_management": false, 00:04:24.869 "zone_append": false, 00:04:24.869 "compare": false, 00:04:24.869 "compare_and_write": false, 00:04:24.869 "abort": true, 00:04:24.869 "seek_hole": false, 00:04:24.869 "seek_data": false, 00:04:24.869 "copy": true, 00:04:24.869 "nvme_iov_md": false 00:04:24.869 }, 00:04:24.869 "memory_domains": [ 00:04:24.869 { 00:04:24.869 "dma_device_id": "system", 00:04:24.869 "dma_device_type": 1 00:04:24.869 }, 00:04:24.869 { 00:04:24.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.869 "dma_device_type": 2 00:04:24.869 } 00:04:24.869 ], 00:04:24.869 "driver_specific": {} 00:04:24.869 }, 00:04:24.869 { 00:04:24.869 "name": "Passthru0", 00:04:24.869 "aliases": [ 00:04:24.869 "c44cc652-cb80-5fab-af37-37236dbc9767" 00:04:24.869 ], 00:04:24.869 "product_name": "passthru", 00:04:24.869 "block_size": 512, 00:04:24.869 "num_blocks": 16384, 00:04:24.869 "uuid": "c44cc652-cb80-5fab-af37-37236dbc9767", 00:04:24.869 "assigned_rate_limits": { 00:04:24.869 "rw_ios_per_sec": 0, 00:04:24.869 "rw_mbytes_per_sec": 0, 00:04:24.869 "r_mbytes_per_sec": 0, 00:04:24.869 "w_mbytes_per_sec": 0 00:04:24.869 }, 00:04:24.869 "claimed": false, 00:04:24.869 "zoned": false, 00:04:24.869 "supported_io_types": { 00:04:24.869 "read": true, 00:04:24.869 "write": true, 00:04:24.869 "unmap": true, 00:04:24.869 "flush": true, 00:04:24.869 "reset": true, 00:04:24.869 "nvme_admin": false, 00:04:24.869 "nvme_io": false, 00:04:24.869 "nvme_io_md": false, 00:04:24.869 "write_zeroes": true, 00:04:24.869 "zcopy": true, 00:04:24.869 "get_zone_info": false, 00:04:24.869 "zone_management": false, 00:04:24.869 "zone_append": false, 00:04:24.869 "compare": false, 00:04:24.869 "compare_and_write": false, 00:04:24.869 "abort": true, 00:04:24.869 "seek_hole": false, 00:04:24.869 "seek_data": false, 00:04:24.869 "copy": true, 00:04:24.869 "nvme_iov_md": false 00:04:24.869 }, 00:04:24.869 "memory_domains": [ 00:04:24.869 { 00:04:24.869 "dma_device_id": "system", 00:04:24.869 "dma_device_type": 1 00:04:24.869 }, 00:04:24.869 { 00:04:24.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.869 "dma_device_type": 2 00:04:24.869 } 00:04:24.869 ], 00:04:24.869 "driver_specific": { 00:04:24.869 "passthru": { 00:04:24.869 "name": "Passthru0", 00:04:24.869 "base_bdev_name": "Malloc0" 00:04:24.869 } 00:04:24.869 } 00:04:24.869 } 00:04:24.869 ]' 00:04:24.869 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.869 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.869 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.869 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.869 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.869 09:10:16 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.869 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.869 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.869 09:10:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.869 00:04:24.869 real 0m0.329s 00:04:24.869 user 0m0.216s 00:04:24.869 sys 0m0.042s 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.869 09:10:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.869 ************************************ 00:04:24.869 END TEST rpc_integrity 00:04:24.869 ************************************ 00:04:25.128 09:10:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:25.128 09:10:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.128 09:10:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.128 09:10:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.128 ************************************ 00:04:25.128 START TEST rpc_plugins 00:04:25.128 ************************************ 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:25.128 { 00:04:25.128 "name": "Malloc1", 00:04:25.128 "aliases": [ 00:04:25.128 "022ae278-cb92-4728-aadc-cb05fe3ad363" 00:04:25.128 ], 00:04:25.128 "product_name": "Malloc disk", 00:04:25.128 "block_size": 4096, 00:04:25.128 "num_blocks": 256, 00:04:25.128 "uuid": "022ae278-cb92-4728-aadc-cb05fe3ad363", 00:04:25.128 "assigned_rate_limits": { 00:04:25.128 "rw_ios_per_sec": 0, 00:04:25.128 "rw_mbytes_per_sec": 0, 00:04:25.128 "r_mbytes_per_sec": 0, 00:04:25.128 "w_mbytes_per_sec": 0 00:04:25.128 }, 00:04:25.128 "claimed": false, 00:04:25.128 "zoned": false, 00:04:25.128 "supported_io_types": { 00:04:25.128 "read": true, 00:04:25.128 "write": true, 00:04:25.128 "unmap": true, 00:04:25.128 "flush": true, 00:04:25.128 "reset": true, 00:04:25.128 "nvme_admin": false, 00:04:25.128 "nvme_io": false, 00:04:25.128 "nvme_io_md": false, 00:04:25.128 "write_zeroes": true, 00:04:25.128 "zcopy": true, 00:04:25.128 "get_zone_info": false, 00:04:25.128 "zone_management": false, 00:04:25.128 "zone_append": false, 00:04:25.128 "compare": false, 00:04:25.128 "compare_and_write": false, 00:04:25.128 "abort": true, 00:04:25.128 "seek_hole": false, 00:04:25.128 "seek_data": false, 00:04:25.128 "copy": true, 00:04:25.128 "nvme_iov_md": false 00:04:25.128 }, 00:04:25.128 "memory_domains": [ 00:04:25.128 { 
00:04:25.128 "dma_device_id": "system", 00:04:25.128 "dma_device_type": 1 00:04:25.128 }, 00:04:25.128 { 00:04:25.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.128 "dma_device_type": 2 00:04:25.128 } 00:04:25.128 ], 00:04:25.128 "driver_specific": {} 00:04:25.128 } 00:04:25.128 ]' 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:25.128 09:10:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:25.128 00:04:25.128 real 0m0.171s 00:04:25.128 user 0m0.111s 00:04:25.128 sys 0m0.023s 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.128 ************************************ 00:04:25.128 09:10:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.128 END TEST rpc_plugins 00:04:25.128 ************************************ 00:04:25.128 09:10:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:25.128 09:10:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.128 09:10:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.128 09:10:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.128 ************************************ 00:04:25.128 START TEST rpc_trace_cmd_test 00:04:25.128 ************************************ 00:04:25.128 09:10:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:25.128 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:25.128 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:25.128 09:10:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.128 09:10:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:25.386 09:10:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.386 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:25.387 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56935", 00:04:25.387 "tpoint_group_mask": "0x8", 00:04:25.387 "iscsi_conn": { 00:04:25.387 "mask": "0x2", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "scsi": { 00:04:25.387 "mask": "0x4", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "bdev": { 00:04:25.387 "mask": "0x8", 00:04:25.387 "tpoint_mask": "0xffffffffffffffff" 00:04:25.387 }, 00:04:25.387 "nvmf_rdma": { 00:04:25.387 "mask": "0x10", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "nvmf_tcp": { 00:04:25.387 "mask": "0x20", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "ftl": { 00:04:25.387 
"mask": "0x40", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "blobfs": { 00:04:25.387 "mask": "0x80", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "dsa": { 00:04:25.387 "mask": "0x200", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "thread": { 00:04:25.387 "mask": "0x400", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "nvme_pcie": { 00:04:25.387 "mask": "0x800", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "iaa": { 00:04:25.387 "mask": "0x1000", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "nvme_tcp": { 00:04:25.387 "mask": "0x2000", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "bdev_nvme": { 00:04:25.387 "mask": "0x4000", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "sock": { 00:04:25.387 "mask": "0x8000", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "blob": { 00:04:25.387 "mask": "0x10000", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "bdev_raid": { 00:04:25.387 "mask": "0x20000", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 }, 00:04:25.387 "scheduler": { 00:04:25.387 "mask": "0x40000", 00:04:25.387 "tpoint_mask": "0x0" 00:04:25.387 } 00:04:25.387 }' 00:04:25.387 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:25.387 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:25.387 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:25.387 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:25.387 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:25.387 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:25.387 09:10:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:25.387 09:10:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:25.387 09:10:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:25.387 09:10:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:25.387 00:04:25.387 real 0m0.277s 00:04:25.387 user 0m0.234s 00:04:25.387 sys 0m0.028s 00:04:25.387 09:10:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.387 ************************************ 00:04:25.387 END TEST rpc_trace_cmd_test 00:04:25.387 09:10:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:25.387 ************************************ 00:04:25.645 09:10:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:25.645 09:10:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:25.645 09:10:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:25.645 09:10:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.645 09:10:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.645 09:10:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.646 ************************************ 00:04:25.646 START TEST rpc_daemon_integrity 00:04:25.646 ************************************ 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.646 
09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.646 { 00:04:25.646 "name": "Malloc2", 00:04:25.646 "aliases": [ 00:04:25.646 "eb37533c-0d66-4f54-8d03-a3a419515ffa" 00:04:25.646 ], 00:04:25.646 "product_name": "Malloc disk", 00:04:25.646 "block_size": 512, 00:04:25.646 "num_blocks": 16384, 00:04:25.646 "uuid": "eb37533c-0d66-4f54-8d03-a3a419515ffa", 00:04:25.646 "assigned_rate_limits": { 00:04:25.646 "rw_ios_per_sec": 0, 00:04:25.646 "rw_mbytes_per_sec": 0, 00:04:25.646 "r_mbytes_per_sec": 0, 00:04:25.646 "w_mbytes_per_sec": 0 00:04:25.646 }, 00:04:25.646 "claimed": false, 00:04:25.646 "zoned": false, 00:04:25.646 "supported_io_types": { 00:04:25.646 "read": true, 00:04:25.646 "write": true, 00:04:25.646 "unmap": true, 00:04:25.646 "flush": true, 00:04:25.646 "reset": true, 00:04:25.646 "nvme_admin": false, 00:04:25.646 "nvme_io": false, 00:04:25.646 "nvme_io_md": false, 00:04:25.646 "write_zeroes": true, 00:04:25.646 "zcopy": true, 00:04:25.646 "get_zone_info": false, 00:04:25.646 "zone_management": false, 00:04:25.646 "zone_append": false, 00:04:25.646 "compare": false, 00:04:25.646 "compare_and_write": false, 00:04:25.646 "abort": true, 00:04:25.646 "seek_hole": false, 00:04:25.646 "seek_data": false, 00:04:25.646 "copy": true, 00:04:25.646 "nvme_iov_md": false 00:04:25.646 }, 00:04:25.646 "memory_domains": [ 00:04:25.646 { 00:04:25.646 "dma_device_id": "system", 00:04:25.646 "dma_device_type": 1 00:04:25.646 }, 00:04:25.646 { 00:04:25.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.646 "dma_device_type": 2 00:04:25.646 } 00:04:25.646 ], 00:04:25.646 "driver_specific": {} 00:04:25.646 } 00:04:25.646 ]' 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.646 [2024-10-08 09:10:17.274261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:25.646 [2024-10-08 09:10:17.274308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:25.646 [2024-10-08 09:10:17.274326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11b8a90 00:04:25.646 [2024-10-08 09:10:17.274336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.646 [2024-10-08 09:10:17.276276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.646 [2024-10-08 09:10:17.276324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.646 Passthru0 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.646 { 00:04:25.646 "name": "Malloc2", 00:04:25.646 "aliases": [ 00:04:25.646 "eb37533c-0d66-4f54-8d03-a3a419515ffa" 00:04:25.646 ], 00:04:25.646 "product_name": "Malloc disk", 00:04:25.646 "block_size": 512, 00:04:25.646 "num_blocks": 16384, 00:04:25.646 "uuid": "eb37533c-0d66-4f54-8d03-a3a419515ffa", 00:04:25.646 "assigned_rate_limits": { 00:04:25.646 "rw_ios_per_sec": 0, 00:04:25.646 "rw_mbytes_per_sec": 0, 00:04:25.646 "r_mbytes_per_sec": 0, 00:04:25.646 "w_mbytes_per_sec": 0 00:04:25.646 }, 00:04:25.646 "claimed": true, 00:04:25.646 "claim_type": "exclusive_write", 00:04:25.646 "zoned": false, 00:04:25.646 "supported_io_types": { 00:04:25.646 "read": true, 00:04:25.646 "write": true, 00:04:25.646 "unmap": true, 00:04:25.646 "flush": true, 00:04:25.646 "reset": true, 00:04:25.646 "nvme_admin": false, 00:04:25.646 "nvme_io": false, 00:04:25.646 "nvme_io_md": false, 00:04:25.646 "write_zeroes": true, 00:04:25.646 "zcopy": true, 00:04:25.646 "get_zone_info": false, 00:04:25.646 "zone_management": false, 00:04:25.646 "zone_append": false, 00:04:25.646 "compare": false, 00:04:25.646 "compare_and_write": false, 00:04:25.646 "abort": true, 00:04:25.646 "seek_hole": false, 00:04:25.646 "seek_data": false, 00:04:25.646 "copy": true, 00:04:25.646 "nvme_iov_md": false 00:04:25.646 }, 00:04:25.646 "memory_domains": [ 00:04:25.646 { 00:04:25.646 "dma_device_id": "system", 00:04:25.646 "dma_device_type": 1 00:04:25.646 }, 00:04:25.646 { 00:04:25.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.646 "dma_device_type": 2 00:04:25.646 } 00:04:25.646 ], 00:04:25.646 "driver_specific": {} 00:04:25.646 }, 00:04:25.646 { 00:04:25.646 "name": "Passthru0", 00:04:25.646 "aliases": [ 00:04:25.646 "d551b0ae-d5d2-5dbb-922c-12d9d0456a83" 00:04:25.646 ], 00:04:25.646 "product_name": "passthru", 00:04:25.646 "block_size": 512, 00:04:25.646 "num_blocks": 16384, 00:04:25.646 "uuid": "d551b0ae-d5d2-5dbb-922c-12d9d0456a83", 00:04:25.646 "assigned_rate_limits": { 00:04:25.646 "rw_ios_per_sec": 0, 00:04:25.646 "rw_mbytes_per_sec": 0, 00:04:25.646 "r_mbytes_per_sec": 0, 00:04:25.646 "w_mbytes_per_sec": 0 00:04:25.646 }, 00:04:25.646 "claimed": false, 00:04:25.646 "zoned": false, 00:04:25.646 "supported_io_types": { 00:04:25.646 "read": true, 00:04:25.646 "write": true, 00:04:25.646 "unmap": true, 00:04:25.646 "flush": true, 00:04:25.646 "reset": true, 00:04:25.646 "nvme_admin": false, 00:04:25.646 "nvme_io": false, 00:04:25.646 
"nvme_io_md": false, 00:04:25.646 "write_zeroes": true, 00:04:25.646 "zcopy": true, 00:04:25.646 "get_zone_info": false, 00:04:25.646 "zone_management": false, 00:04:25.646 "zone_append": false, 00:04:25.646 "compare": false, 00:04:25.646 "compare_and_write": false, 00:04:25.646 "abort": true, 00:04:25.646 "seek_hole": false, 00:04:25.646 "seek_data": false, 00:04:25.646 "copy": true, 00:04:25.646 "nvme_iov_md": false 00:04:25.646 }, 00:04:25.646 "memory_domains": [ 00:04:25.646 { 00:04:25.646 "dma_device_id": "system", 00:04:25.646 "dma_device_type": 1 00:04:25.646 }, 00:04:25.646 { 00:04:25.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.646 "dma_device_type": 2 00:04:25.646 } 00:04:25.646 ], 00:04:25.646 "driver_specific": { 00:04:25.646 "passthru": { 00:04:25.646 "name": "Passthru0", 00:04:25.646 "base_bdev_name": "Malloc2" 00:04:25.646 } 00:04:25.646 } 00:04:25.646 } 00:04:25.646 ]' 00:04:25.646 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.907 00:04:25.907 real 0m0.325s 00:04:25.907 user 0m0.213s 00:04:25.907 sys 0m0.046s 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.907 09:10:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.907 ************************************ 00:04:25.907 END TEST rpc_daemon_integrity 00:04:25.907 ************************************ 00:04:25.907 09:10:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:25.907 09:10:17 rpc -- rpc/rpc.sh@84 -- # killprocess 56935 00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@950 -- # '[' -z 56935 ']' 00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@954 -- # kill -0 56935 00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@955 -- # uname 00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56935 00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.907 killing process with pid 56935 00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56935' 00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@969 -- # kill 56935 00:04:25.907 09:10:17 rpc -- common/autotest_common.sh@974 -- # wait 56935 00:04:26.476 00:04:26.476 real 0m3.078s 00:04:26.476 user 0m3.980s 00:04:26.476 sys 0m0.734s 00:04:26.476 09:10:17 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.476 ************************************ 00:04:26.476 END TEST rpc 00:04:26.476 ************************************ 00:04:26.476 09:10:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.476 09:10:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:26.476 09:10:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.476 09:10:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.476 09:10:17 -- common/autotest_common.sh@10 -- # set +x 00:04:26.476 ************************************ 00:04:26.476 START TEST skip_rpc 00:04:26.476 ************************************ 00:04:26.476 09:10:17 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:26.476 * Looking for test storage... 00:04:26.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.476 09:10:18 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:26.476 09:10:18 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:26.476 09:10:18 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:26.735 09:10:18 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.735 09:10:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:26.735 09:10:18 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.735 09:10:18 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:26.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.735 --rc genhtml_branch_coverage=1 00:04:26.735 --rc genhtml_function_coverage=1 00:04:26.735 --rc genhtml_legend=1 00:04:26.735 --rc geninfo_all_blocks=1 00:04:26.735 --rc geninfo_unexecuted_blocks=1 00:04:26.735 00:04:26.735 ' 00:04:26.735 09:10:18 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:26.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.735 --rc genhtml_branch_coverage=1 00:04:26.735 --rc genhtml_function_coverage=1 00:04:26.735 --rc genhtml_legend=1 00:04:26.735 --rc geninfo_all_blocks=1 00:04:26.735 --rc geninfo_unexecuted_blocks=1 00:04:26.735 00:04:26.735 ' 00:04:26.735 09:10:18 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:26.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.735 --rc genhtml_branch_coverage=1 00:04:26.735 --rc genhtml_function_coverage=1 00:04:26.735 --rc genhtml_legend=1 00:04:26.735 --rc geninfo_all_blocks=1 00:04:26.735 --rc geninfo_unexecuted_blocks=1 00:04:26.735 00:04:26.735 ' 00:04:26.735 09:10:18 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:26.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.735 --rc genhtml_branch_coverage=1 00:04:26.735 --rc genhtml_function_coverage=1 00:04:26.735 --rc genhtml_legend=1 00:04:26.735 --rc geninfo_all_blocks=1 00:04:26.735 --rc geninfo_unexecuted_blocks=1 00:04:26.735 00:04:26.735 ' 00:04:26.735 09:10:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.735 09:10:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.735 09:10:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:26.735 09:10:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.735 09:10:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.735 09:10:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.735 ************************************ 00:04:26.735 START TEST skip_rpc 00:04:26.735 ************************************ 00:04:26.735 09:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:26.735 09:10:18 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57141 00:04:26.735 09:10:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.735 09:10:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:26.735 09:10:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:26.735 [2024-10-08 09:10:18.268819] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:26.735 [2024-10-08 09:10:18.268947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57141 ] 00:04:26.735 [2024-10-08 09:10:18.408666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.993 [2024-10-08 09:10:18.503373] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.993 [2024-10-08 09:10:18.577704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57141 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57141 ']' 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57141 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57141 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.263 killing process with pid 57141 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 57141' 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57141 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57141 00:04:32.263 00:04:32.263 real 0m5.465s 00:04:32.263 user 0m5.071s 00:04:32.263 sys 0m0.313s 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.263 09:10:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.263 ************************************ 00:04:32.263 END TEST skip_rpc 00:04:32.263 ************************************ 00:04:32.263 09:10:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.263 09:10:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.263 09:10:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.263 09:10:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.263 ************************************ 00:04:32.263 START TEST skip_rpc_with_json 00:04:32.263 ************************************ 00:04:32.263 09:10:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:32.263 09:10:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.263 09:10:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57228 00:04:32.264 09:10:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.264 09:10:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.264 09:10:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57228 00:04:32.264 09:10:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57228 ']' 00:04:32.264 09:10:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.264 09:10:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.264 09:10:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.264 09:10:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.264 09:10:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.264 [2024-10-08 09:10:23.774184] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:04:32.264 [2024-10-08 09:10:23.774274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57228 ] 00:04:32.264 [2024-10-08 09:10:23.918143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.522 [2024-10-08 09:10:24.054624] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.522 [2024-10-08 09:10:24.156677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.459 [2024-10-08 09:10:24.822977] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:33.459 request: 00:04:33.459 { 00:04:33.459 "trtype": "tcp", 00:04:33.459 "method": "nvmf_get_transports", 00:04:33.459 "req_id": 1 00:04:33.459 } 00:04:33.459 Got JSON-RPC error response 00:04:33.459 response: 00:04:33.459 { 00:04:33.459 "code": -19, 00:04:33.459 "message": "No such device" 00:04:33.459 } 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.459 [2024-10-08 09:10:24.835085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.459 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.459 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.459 09:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.459 { 00:04:33.459 "subsystems": [ 00:04:33.459 { 00:04:33.459 "subsystem": "fsdev", 00:04:33.459 "config": [ 00:04:33.459 { 00:04:33.459 "method": "fsdev_set_opts", 00:04:33.459 "params": { 00:04:33.459 "fsdev_io_pool_size": 65535, 00:04:33.459 "fsdev_io_cache_size": 256 00:04:33.459 } 00:04:33.459 } 00:04:33.459 ] 00:04:33.459 }, 00:04:33.459 { 00:04:33.459 "subsystem": "keyring", 00:04:33.459 "config": [] 00:04:33.459 }, 00:04:33.459 { 00:04:33.459 "subsystem": "iobuf", 00:04:33.459 "config": [ 00:04:33.459 { 00:04:33.459 "method": "iobuf_set_options", 00:04:33.459 "params": { 00:04:33.459 "small_pool_count": 8192, 00:04:33.459 "large_pool_count": 1024, 00:04:33.459 "small_bufsize": 8192, 00:04:33.459 "large_bufsize": 135168 00:04:33.459 } 00:04:33.459 } 00:04:33.459 ] 00:04:33.459 
}, 00:04:33.459 { 00:04:33.459 "subsystem": "sock", 00:04:33.459 "config": [ 00:04:33.459 { 00:04:33.459 "method": "sock_set_default_impl", 00:04:33.459 "params": { 00:04:33.459 "impl_name": "uring" 00:04:33.459 } 00:04:33.459 }, 00:04:33.459 { 00:04:33.459 "method": "sock_impl_set_options", 00:04:33.459 "params": { 00:04:33.459 "impl_name": "ssl", 00:04:33.459 "recv_buf_size": 4096, 00:04:33.459 "send_buf_size": 4096, 00:04:33.459 "enable_recv_pipe": true, 00:04:33.459 "enable_quickack": false, 00:04:33.459 "enable_placement_id": 0, 00:04:33.459 "enable_zerocopy_send_server": true, 00:04:33.459 "enable_zerocopy_send_client": false, 00:04:33.459 "zerocopy_threshold": 0, 00:04:33.459 "tls_version": 0, 00:04:33.459 "enable_ktls": false 00:04:33.459 } 00:04:33.459 }, 00:04:33.459 { 00:04:33.459 "method": "sock_impl_set_options", 00:04:33.459 "params": { 00:04:33.459 "impl_name": "posix", 00:04:33.459 "recv_buf_size": 2097152, 00:04:33.459 "send_buf_size": 2097152, 00:04:33.459 "enable_recv_pipe": true, 00:04:33.459 "enable_quickack": false, 00:04:33.459 "enable_placement_id": 0, 00:04:33.459 "enable_zerocopy_send_server": true, 00:04:33.459 "enable_zerocopy_send_client": false, 00:04:33.459 "zerocopy_threshold": 0, 00:04:33.459 "tls_version": 0, 00:04:33.459 "enable_ktls": false 00:04:33.459 } 00:04:33.459 }, 00:04:33.459 { 00:04:33.459 "method": "sock_impl_set_options", 00:04:33.460 "params": { 00:04:33.460 "impl_name": "uring", 00:04:33.460 "recv_buf_size": 2097152, 00:04:33.460 "send_buf_size": 2097152, 00:04:33.460 "enable_recv_pipe": true, 00:04:33.460 "enable_quickack": false, 00:04:33.460 "enable_placement_id": 0, 00:04:33.460 "enable_zerocopy_send_server": false, 00:04:33.460 "enable_zerocopy_send_client": false, 00:04:33.460 "zerocopy_threshold": 0, 00:04:33.460 "tls_version": 0, 00:04:33.460 "enable_ktls": false 00:04:33.460 } 00:04:33.460 } 00:04:33.460 ] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "vmd", 00:04:33.460 "config": [] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "accel", 00:04:33.460 "config": [ 00:04:33.460 { 00:04:33.460 "method": "accel_set_options", 00:04:33.460 "params": { 00:04:33.460 "small_cache_size": 128, 00:04:33.460 "large_cache_size": 16, 00:04:33.460 "task_count": 2048, 00:04:33.460 "sequence_count": 2048, 00:04:33.460 "buf_count": 2048 00:04:33.460 } 00:04:33.460 } 00:04:33.460 ] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "bdev", 00:04:33.460 "config": [ 00:04:33.460 { 00:04:33.460 "method": "bdev_set_options", 00:04:33.460 "params": { 00:04:33.460 "bdev_io_pool_size": 65535, 00:04:33.460 "bdev_io_cache_size": 256, 00:04:33.460 "bdev_auto_examine": true, 00:04:33.460 "iobuf_small_cache_size": 128, 00:04:33.460 "iobuf_large_cache_size": 16 00:04:33.460 } 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "method": "bdev_raid_set_options", 00:04:33.460 "params": { 00:04:33.460 "process_window_size_kb": 1024, 00:04:33.460 "process_max_bandwidth_mb_sec": 0 00:04:33.460 } 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "method": "bdev_iscsi_set_options", 00:04:33.460 "params": { 00:04:33.460 "timeout_sec": 30 00:04:33.460 } 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "method": "bdev_nvme_set_options", 00:04:33.460 "params": { 00:04:33.460 "action_on_timeout": "none", 00:04:33.460 "timeout_us": 0, 00:04:33.460 "timeout_admin_us": 0, 00:04:33.460 "keep_alive_timeout_ms": 10000, 00:04:33.460 "arbitration_burst": 0, 00:04:33.460 "low_priority_weight": 0, 00:04:33.460 "medium_priority_weight": 0, 00:04:33.460 "high_priority_weight": 0, 
00:04:33.460 "nvme_adminq_poll_period_us": 10000, 00:04:33.460 "nvme_ioq_poll_period_us": 0, 00:04:33.460 "io_queue_requests": 0, 00:04:33.460 "delay_cmd_submit": true, 00:04:33.460 "transport_retry_count": 4, 00:04:33.460 "bdev_retry_count": 3, 00:04:33.460 "transport_ack_timeout": 0, 00:04:33.460 "ctrlr_loss_timeout_sec": 0, 00:04:33.460 "reconnect_delay_sec": 0, 00:04:33.460 "fast_io_fail_timeout_sec": 0, 00:04:33.460 "disable_auto_failback": false, 00:04:33.460 "generate_uuids": false, 00:04:33.460 "transport_tos": 0, 00:04:33.460 "nvme_error_stat": false, 00:04:33.460 "rdma_srq_size": 0, 00:04:33.460 "io_path_stat": false, 00:04:33.460 "allow_accel_sequence": false, 00:04:33.460 "rdma_max_cq_size": 0, 00:04:33.460 "rdma_cm_event_timeout_ms": 0, 00:04:33.460 "dhchap_digests": [ 00:04:33.460 "sha256", 00:04:33.460 "sha384", 00:04:33.460 "sha512" 00:04:33.460 ], 00:04:33.460 "dhchap_dhgroups": [ 00:04:33.460 "null", 00:04:33.460 "ffdhe2048", 00:04:33.460 "ffdhe3072", 00:04:33.460 "ffdhe4096", 00:04:33.460 "ffdhe6144", 00:04:33.460 "ffdhe8192" 00:04:33.460 ] 00:04:33.460 } 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "method": "bdev_nvme_set_hotplug", 00:04:33.460 "params": { 00:04:33.460 "period_us": 100000, 00:04:33.460 "enable": false 00:04:33.460 } 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "method": "bdev_wait_for_examine" 00:04:33.460 } 00:04:33.460 ] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "scsi", 00:04:33.460 "config": null 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "scheduler", 00:04:33.460 "config": [ 00:04:33.460 { 00:04:33.460 "method": "framework_set_scheduler", 00:04:33.460 "params": { 00:04:33.460 "name": "static" 00:04:33.460 } 00:04:33.460 } 00:04:33.460 ] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "vhost_scsi", 00:04:33.460 "config": [] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "vhost_blk", 00:04:33.460 "config": [] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "ublk", 00:04:33.460 "config": [] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "nbd", 00:04:33.460 "config": [] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "nvmf", 00:04:33.460 "config": [ 00:04:33.460 { 00:04:33.460 "method": "nvmf_set_config", 00:04:33.460 "params": { 00:04:33.460 "discovery_filter": "match_any", 00:04:33.460 "admin_cmd_passthru": { 00:04:33.460 "identify_ctrlr": false 00:04:33.460 }, 00:04:33.460 "dhchap_digests": [ 00:04:33.460 "sha256", 00:04:33.460 "sha384", 00:04:33.460 "sha512" 00:04:33.460 ], 00:04:33.460 "dhchap_dhgroups": [ 00:04:33.460 "null", 00:04:33.460 "ffdhe2048", 00:04:33.460 "ffdhe3072", 00:04:33.460 "ffdhe4096", 00:04:33.460 "ffdhe6144", 00:04:33.460 "ffdhe8192" 00:04:33.460 ] 00:04:33.460 } 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "method": "nvmf_set_max_subsystems", 00:04:33.460 "params": { 00:04:33.460 "max_subsystems": 1024 00:04:33.460 } 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "method": "nvmf_set_crdt", 00:04:33.460 "params": { 00:04:33.460 "crdt1": 0, 00:04:33.460 "crdt2": 0, 00:04:33.460 "crdt3": 0 00:04:33.460 } 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "method": "nvmf_create_transport", 00:04:33.460 "params": { 00:04:33.460 "trtype": "TCP", 00:04:33.460 "max_queue_depth": 128, 00:04:33.460 "max_io_qpairs_per_ctrlr": 127, 00:04:33.460 "in_capsule_data_size": 4096, 00:04:33.460 "max_io_size": 131072, 00:04:33.460 "io_unit_size": 131072, 00:04:33.460 "max_aq_depth": 128, 00:04:33.460 "num_shared_buffers": 511, 00:04:33.460 "buf_cache_size": 4294967295, 00:04:33.460 
"dif_insert_or_strip": false, 00:04:33.460 "zcopy": false, 00:04:33.460 "c2h_success": true, 00:04:33.460 "sock_priority": 0, 00:04:33.460 "abort_timeout_sec": 1, 00:04:33.460 "ack_timeout": 0, 00:04:33.460 "data_wr_pool_size": 0 00:04:33.460 } 00:04:33.460 } 00:04:33.460 ] 00:04:33.460 }, 00:04:33.460 { 00:04:33.460 "subsystem": "iscsi", 00:04:33.460 "config": [ 00:04:33.460 { 00:04:33.460 "method": "iscsi_set_options", 00:04:33.460 "params": { 00:04:33.460 "node_base": "iqn.2016-06.io.spdk", 00:04:33.460 "max_sessions": 128, 00:04:33.460 "max_connections_per_session": 2, 00:04:33.460 "max_queue_depth": 64, 00:04:33.460 "default_time2wait": 2, 00:04:33.460 "default_time2retain": 20, 00:04:33.460 "first_burst_length": 8192, 00:04:33.460 "immediate_data": true, 00:04:33.460 "allow_duplicated_isid": false, 00:04:33.460 "error_recovery_level": 0, 00:04:33.460 "nop_timeout": 60, 00:04:33.460 "nop_in_interval": 30, 00:04:33.460 "disable_chap": false, 00:04:33.460 "require_chap": false, 00:04:33.460 "mutual_chap": false, 00:04:33.460 "chap_group": 0, 00:04:33.460 "max_large_datain_per_connection": 64, 00:04:33.460 "max_r2t_per_connection": 4, 00:04:33.460 "pdu_pool_size": 36864, 00:04:33.460 "immediate_data_pool_size": 16384, 00:04:33.460 "data_out_pool_size": 2048 00:04:33.460 } 00:04:33.460 } 00:04:33.460 ] 00:04:33.460 } 00:04:33.460 ] 00:04:33.460 } 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57228 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57228 ']' 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57228 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57228 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:33.460 killing process with pid 57228 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57228' 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57228 00:04:33.460 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57228 00:04:34.028 09:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57255 00:04:34.028 09:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.028 09:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57255 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57255 ']' 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57255 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57255 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.298 killing process with pid 57255 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57255' 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57255 00:04:39.298 09:10:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57255 00:04:39.557 09:10:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.557 09:10:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.557 00:04:39.557 real 0m7.364s 00:04:39.557 user 0m6.910s 00:04:39.557 sys 0m0.898s 00:04:39.557 09:10:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.557 09:10:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.557 ************************************ 00:04:39.557 END TEST skip_rpc_with_json 00:04:39.557 ************************************ 00:04:39.557 09:10:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:39.557 09:10:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.557 09:10:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.557 09:10:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.558 ************************************ 00:04:39.558 START TEST skip_rpc_with_delay 00:04:39.558 ************************************ 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.558 [2024-10-08 09:10:31.202155] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:39.558 [2024-10-08 09:10:31.202269] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:39.558 00:04:39.558 real 0m0.087s 00:04:39.558 user 0m0.054s 00:04:39.558 sys 0m0.031s 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.558 09:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:39.558 ************************************ 00:04:39.558 END TEST skip_rpc_with_delay 00:04:39.558 ************************************ 00:04:39.817 09:10:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:39.817 09:10:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:39.817 09:10:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:39.817 09:10:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.817 09:10:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.817 09:10:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.817 ************************************ 00:04:39.817 START TEST exit_on_failed_rpc_init 00:04:39.817 ************************************ 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57365 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57365 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57365 ']' 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.817 09:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.817 [2024-10-08 09:10:31.336163] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:04:39.817 [2024-10-08 09:10:31.336249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57365 ] 00:04:39.817 [2024-10-08 09:10:31.470324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.075 [2024-10-08 09:10:31.590820] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.075 [2024-10-08 09:10:31.668642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:41.012 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.012 [2024-10-08 09:10:32.486105] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:41.012 [2024-10-08 09:10:32.486284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57383 ] 00:04:41.012 [2024-10-08 09:10:32.628497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.272 [2024-10-08 09:10:32.766400] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.272 [2024-10-08 09:10:32.766538] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:41.272 [2024-10-08 09:10:32.766556] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:41.272 [2024-10-08 09:10:32.766566] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57365 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57365 ']' 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57365 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57365 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.272 killing process with pid 57365 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57365' 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57365 00:04:41.272 09:10:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57365 00:04:41.839 00:04:41.839 real 0m2.113s 00:04:41.839 user 0m2.524s 00:04:41.839 sys 0m0.500s 00:04:41.839 09:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.839 09:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.839 ************************************ 00:04:41.839 END TEST exit_on_failed_rpc_init 00:04:41.839 ************************************ 00:04:41.839 09:10:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.839 00:04:41.839 real 0m15.442s 00:04:41.839 user 0m14.727s 00:04:41.839 sys 0m1.974s 00:04:41.839 09:10:33 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.839 09:10:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.839 ************************************ 00:04:41.839 END TEST skip_rpc 00:04:41.839 ************************************ 00:04:41.839 09:10:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.839 09:10:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.839 09:10:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.839 09:10:33 -- common/autotest_common.sh@10 -- # set +x 00:04:41.839 
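The skip_rpc_with_json run above boots spdk_tgt entirely from a saved JSON file instead of issuing live RPCs, which is why the full subsystem dump appears in the log before the target is killed. A minimal sketch of that round trip, using only the binaries and flags visible in this log (the config file is presumably produced with the save_config RPC, as the json_config test further down does):
# Dump the running target's configuration; this yields JSON like the subsystem dump above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
# Relaunch with the RPC server disabled, replaying the saved configuration at startup.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
The test then greps the target's log for 'TCP Transport Init' to confirm the nvmf transport from the saved config really came up without any RPC traffic.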
************************************ 00:04:41.839 START TEST rpc_client 00:04:41.839 ************************************ 00:04:41.839 09:10:33 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.099 * Looking for test storage... 00:04:42.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:42.099 09:10:33 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:42.099 09:10:33 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:42.099 09:10:33 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:42.099 09:10:33 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.099 09:10:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:42.099 09:10:33 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.099 09:10:33 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:42.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.099 --rc genhtml_branch_coverage=1 00:04:42.099 --rc genhtml_function_coverage=1 00:04:42.099 --rc genhtml_legend=1 00:04:42.099 --rc geninfo_all_blocks=1 00:04:42.099 --rc geninfo_unexecuted_blocks=1 00:04:42.099 00:04:42.099 ' 00:04:42.099 09:10:33 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:42.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.100 --rc genhtml_branch_coverage=1 00:04:42.100 --rc genhtml_function_coverage=1 00:04:42.100 --rc genhtml_legend=1 00:04:42.100 --rc geninfo_all_blocks=1 00:04:42.100 --rc geninfo_unexecuted_blocks=1 00:04:42.100 00:04:42.100 ' 00:04:42.100 09:10:33 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:42.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.100 --rc genhtml_branch_coverage=1 00:04:42.100 --rc genhtml_function_coverage=1 00:04:42.100 --rc genhtml_legend=1 00:04:42.100 --rc geninfo_all_blocks=1 00:04:42.100 --rc geninfo_unexecuted_blocks=1 00:04:42.100 00:04:42.100 ' 00:04:42.100 09:10:33 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:42.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.100 --rc genhtml_branch_coverage=1 00:04:42.100 --rc genhtml_function_coverage=1 00:04:42.100 --rc genhtml_legend=1 00:04:42.100 --rc geninfo_all_blocks=1 00:04:42.100 --rc geninfo_unexecuted_blocks=1 00:04:42.100 00:04:42.100 ' 00:04:42.100 09:10:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:42.100 OK 00:04:42.100 09:10:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.100 00:04:42.100 real 0m0.187s 00:04:42.100 user 0m0.108s 00:04:42.100 sys 0m0.091s 00:04:42.100 09:10:33 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.100 09:10:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:42.100 ************************************ 00:04:42.100 END TEST rpc_client 00:04:42.100 ************************************ 00:04:42.100 09:10:33 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:42.100 09:10:33 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.100 09:10:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.100 09:10:33 -- common/autotest_common.sh@10 -- # set +x 00:04:42.100 ************************************ 00:04:42.100 START TEST json_config 00:04:42.100 ************************************ 00:04:42.100 09:10:33 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:42.360 09:10:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.360 09:10:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.360 09:10:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.360 09:10:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.360 09:10:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.360 09:10:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.360 09:10:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.360 09:10:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.360 09:10:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.360 09:10:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.360 09:10:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.360 09:10:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:42.360 09:10:33 json_config -- scripts/common.sh@345 -- # : 1 00:04:42.360 09:10:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.360 09:10:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.360 09:10:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:42.360 09:10:33 json_config -- scripts/common.sh@353 -- # local d=1 00:04:42.360 09:10:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.360 09:10:33 json_config -- scripts/common.sh@355 -- # echo 1 00:04:42.360 09:10:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.360 09:10:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:42.360 09:10:33 json_config -- scripts/common.sh@353 -- # local d=2 00:04:42.360 09:10:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.360 09:10:33 json_config -- scripts/common.sh@355 -- # echo 2 00:04:42.360 09:10:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.360 09:10:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.360 09:10:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.360 09:10:33 json_config -- scripts/common.sh@368 -- # return 0 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:42.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.360 --rc genhtml_branch_coverage=1 00:04:42.360 --rc genhtml_function_coverage=1 00:04:42.360 --rc genhtml_legend=1 00:04:42.360 --rc geninfo_all_blocks=1 00:04:42.360 --rc geninfo_unexecuted_blocks=1 00:04:42.360 00:04:42.360 ' 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:42.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.360 --rc genhtml_branch_coverage=1 00:04:42.360 --rc genhtml_function_coverage=1 00:04:42.360 --rc genhtml_legend=1 00:04:42.360 --rc geninfo_all_blocks=1 00:04:42.360 --rc geninfo_unexecuted_blocks=1 00:04:42.360 00:04:42.360 ' 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:42.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.360 --rc genhtml_branch_coverage=1 00:04:42.360 --rc genhtml_function_coverage=1 00:04:42.360 --rc genhtml_legend=1 00:04:42.360 --rc geninfo_all_blocks=1 00:04:42.360 --rc geninfo_unexecuted_blocks=1 00:04:42.360 00:04:42.360 ' 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:42.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.360 --rc genhtml_branch_coverage=1 00:04:42.360 --rc genhtml_function_coverage=1 00:04:42.360 --rc genhtml_legend=1 00:04:42.360 --rc geninfo_all_blocks=1 00:04:42.360 --rc geninfo_unexecuted_blocks=1 00:04:42.360 00:04:42.360 ' 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.360 09:10:33 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:42.360 09:10:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.360 09:10:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.360 09:10:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.360 09:10:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.360 09:10:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.360 09:10:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.360 09:10:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.360 09:10:33 json_config -- paths/export.sh@5 -- # export PATH 00:04:42.360 09:10:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@51 -- # : 0 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.360 09:10:33 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.360 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.360 09:10:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:42.360 INFO: JSON configuration test init 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.360 09:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.360 09:10:33 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:42.360 09:10:33 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.361 09:10:33 json_config -- json_config/common.sh@10 -- # shift 
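The json_config test drives the target purely over its Unix-domain RPC socket. Roughly what json_config_test_start_app plus the load_config step further down amount to, sketched with the exact binaries and socket path from this log (the backgrounding and the gen_nvme.sh pipe are assumptions about how the helper wires things together, not a verbatim copy of it):
# Start the target paused (--wait-for-rpc) on a dedicated RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# Generate a bdev/NVMe-oF subsystem config and stream it into the waiting target.
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems \
    | /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config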
00:04:42.361 09:10:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.361 09:10:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.361 09:10:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.361 09:10:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.361 09:10:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.361 09:10:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57528 00:04:42.361 09:10:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.361 09:10:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:42.361 Waiting for target to run... 00:04:42.361 09:10:33 json_config -- json_config/common.sh@25 -- # waitforlisten 57528 /var/tmp/spdk_tgt.sock 00:04:42.361 09:10:33 json_config -- common/autotest_common.sh@831 -- # '[' -z 57528 ']' 00:04:42.361 09:10:33 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.361 09:10:33 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.361 09:10:33 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.361 09:10:33 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.361 09:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.361 [2024-10-08 09:10:34.029515] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:42.361 [2024-10-08 09:10:34.029637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57528 ] 00:04:42.928 [2024-10-08 09:10:34.583445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.186 [2024-10-08 09:10:34.677257] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.444 00:04:43.444 09:10:35 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.444 09:10:35 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:43.444 09:10:35 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.444 09:10:35 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:43.444 09:10:35 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:43.444 09:10:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:43.444 09:10:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.444 09:10:35 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:43.444 09:10:35 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:43.445 09:10:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.445 09:10:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.445 09:10:35 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:43.445 09:10:35 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:43.445 09:10:35 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:44.012 [2024-10-08 09:10:35.389027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:44.012 09:10:35 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:44.012 09:10:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:44.012 09:10:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.012 09:10:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.012 09:10:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:44.012 09:10:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:44.012 09:10:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:44.012 09:10:35 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:44.012 09:10:35 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:44.012 09:10:35 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:44.012 09:10:35 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:44.012 09:10:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@54 -- # sort 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:44.271 09:10:35 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:44.271 09:10:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.271 09:10:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:44.529 09:10:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.529 09:10:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.529 09:10:35 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:44.529 09:10:35 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:44.529 09:10:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:44.787 MallocForNvmf0 00:04:44.787 09:10:36 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:44.787 09:10:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:45.046 MallocForNvmf1 00:04:45.046 09:10:36 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:45.046 09:10:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:45.304 [2024-10-08 09:10:36.871062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.304 09:10:36 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:45.304 09:10:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:45.563 09:10:37 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:45.563 09:10:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:45.821 09:10:37 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:45.821 09:10:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:46.079 09:10:37 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:46.079 09:10:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:46.340 [2024-10-08 09:10:37.863603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:46.340 09:10:37 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:46.340 09:10:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.340 09:10:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.340 09:10:37 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:46.340 09:10:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.340 09:10:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.340 09:10:37 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:04:46.340 09:10:37 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:46.340 09:10:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:46.598 MallocBdevForConfigChangeCheck 00:04:46.598 09:10:38 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:46.598 09:10:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.598 09:10:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.867 09:10:38 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:46.867 09:10:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.126 INFO: shutting down applications... 00:04:47.126 09:10:38 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:47.126 09:10:38 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:47.126 09:10:38 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:47.126 09:10:38 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:47.126 09:10:38 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:47.385 Calling clear_iscsi_subsystem 00:04:47.385 Calling clear_nvmf_subsystem 00:04:47.385 Calling clear_nbd_subsystem 00:04:47.385 Calling clear_ublk_subsystem 00:04:47.385 Calling clear_vhost_blk_subsystem 00:04:47.385 Calling clear_vhost_scsi_subsystem 00:04:47.385 Calling clear_bdev_subsystem 00:04:47.385 09:10:38 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:47.385 09:10:38 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:47.385 09:10:38 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:47.385 09:10:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.385 09:10:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:47.385 09:10:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:47.952 09:10:39 json_config -- json_config/json_config.sh@352 -- # break 00:04:47.952 09:10:39 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:47.952 09:10:39 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:47.952 09:10:39 json_config -- json_config/common.sh@31 -- # local app=target 00:04:47.952 09:10:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.952 09:10:39 json_config -- json_config/common.sh@35 -- # [[ -n 57528 ]] 00:04:47.952 09:10:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57528 00:04:47.952 09:10:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.952 09:10:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.952 09:10:39 json_config -- json_config/common.sh@41 -- # kill -0 57528 00:04:47.952 09:10:39 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:04:48.519 09:10:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.519 09:10:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.519 09:10:39 json_config -- json_config/common.sh@41 -- # kill -0 57528 00:04:48.519 09:10:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.519 09:10:39 json_config -- json_config/common.sh@43 -- # break 00:04:48.519 09:10:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.519 SPDK target shutdown done 00:04:48.519 09:10:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.519 INFO: relaunching applications... 00:04:48.519 09:10:39 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:48.519 09:10:39 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.519 09:10:39 json_config -- json_config/common.sh@9 -- # local app=target 00:04:48.519 09:10:39 json_config -- json_config/common.sh@10 -- # shift 00:04:48.519 09:10:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.519 09:10:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.519 09:10:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.519 09:10:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.519 09:10:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.519 09:10:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57723 00:04:48.519 Waiting for target to run... 00:04:48.519 09:10:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.519 09:10:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.519 09:10:39 json_config -- json_config/common.sh@25 -- # waitforlisten 57723 /var/tmp/spdk_tgt.sock 00:04:48.519 09:10:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 57723 ']' 00:04:48.519 09:10:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.519 09:10:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.519 09:10:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.519 09:10:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.519 09:10:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.519 [2024-10-08 09:10:40.012996] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:04:48.519 [2024-10-08 09:10:40.013276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57723 ] 00:04:48.778 [2024-10-08 09:10:40.426250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.036 [2024-10-08 09:10:40.501601] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.036 [2024-10-08 09:10:40.637160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:49.295 [2024-10-08 09:10:40.851239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.295 [2024-10-08 09:10:40.883314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:49.554 00:04:49.554 INFO: Checking if target configuration is the same... 00:04:49.554 09:10:40 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.554 09:10:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:49.554 09:10:40 json_config -- json_config/common.sh@26 -- # echo '' 00:04:49.554 09:10:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:49.554 09:10:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:49.554 09:10:40 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.554 09:10:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:49.554 09:10:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.554 + '[' 2 -ne 2 ']' 00:04:49.554 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:49.554 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:49.554 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:49.554 +++ basename /dev/fd/62 00:04:49.554 ++ mktemp /tmp/62.XXX 00:04:49.554 + tmp_file_1=/tmp/62.Xsr 00:04:49.554 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.554 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:49.554 + tmp_file_2=/tmp/spdk_tgt_config.json.Il1 00:04:49.554 + ret=0 00:04:49.554 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.812 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.812 + diff -u /tmp/62.Xsr /tmp/spdk_tgt_config.json.Il1 00:04:49.812 INFO: JSON config files are the same 00:04:49.812 + echo 'INFO: JSON config files are the same' 00:04:49.812 + rm /tmp/62.Xsr /tmp/spdk_tgt_config.json.Il1 00:04:49.812 + exit 0 00:04:49.812 INFO: changing configuration and checking if this can be detected... 00:04:49.812 09:10:41 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:49.812 09:10:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
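The "Checking if target configuration is the same" step above is driven by test/json_config/json_diff.sh: both inputs are passed through config_filter.py so that ordering differences cannot hide or fake a mismatch, and a plain diff decides the verdict. A rough equivalent using only the tools invoked in the log (the /tmp file names here are placeholders, not the mktemp names above):
# Normalize the live configuration and the saved spdk_tgt_config.json, then compare.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
    < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved_sorted.json
# Exit status 0 here means the --json relaunch reproduced the running configuration.
diff -u /tmp/saved_sorted.json /tmp/live_sorted.json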
00:04:49.812 09:10:41 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:49.812 09:10:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.071 09:10:41 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.071 09:10:41 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:50.071 09:10:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.071 + '[' 2 -ne 2 ']' 00:04:50.071 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:50.071 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:50.071 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:50.071 +++ basename /dev/fd/62 00:04:50.071 ++ mktemp /tmp/62.XXX 00:04:50.071 + tmp_file_1=/tmp/62.Qop 00:04:50.071 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.071 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.071 + tmp_file_2=/tmp/spdk_tgt_config.json.uPD 00:04:50.071 + ret=0 00:04:50.071 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:50.637 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:50.638 + diff -u /tmp/62.Qop /tmp/spdk_tgt_config.json.uPD 00:04:50.638 + ret=1 00:04:50.638 + echo '=== Start of file: /tmp/62.Qop ===' 00:04:50.638 + cat /tmp/62.Qop 00:04:50.638 + echo '=== End of file: /tmp/62.Qop ===' 00:04:50.638 + echo '' 00:04:50.638 + echo '=== Start of file: /tmp/spdk_tgt_config.json.uPD ===' 00:04:50.638 + cat /tmp/spdk_tgt_config.json.uPD 00:04:50.638 + echo '=== End of file: /tmp/spdk_tgt_config.json.uPD ===' 00:04:50.638 + echo '' 00:04:50.638 + rm /tmp/62.Qop /tmp/spdk_tgt_config.json.uPD 00:04:50.638 + exit 1 00:04:50.638 INFO: configuration change detected. 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
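MallocBdevForConfigChangeCheck exists only so the test has something to mutate: removing it from the live target guarantees the next normalized comparison differs, which is what ret=1 above records. A short sketch of that final step, reusing the comparison from the previous sketch:
# Delete the marker bdev over RPC; the saved and live configurations now genuinely differ.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
# Re-running the sorted diff is now expected to fail (exit status 1), flagging the change.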
00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@324 -- # [[ -n 57723 ]] 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.638 09:10:42 json_config -- json_config/json_config.sh@330 -- # killprocess 57723 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@950 -- # '[' -z 57723 ']' 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@954 -- # kill -0 57723 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@955 -- # uname 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.638 09:10:42 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57723 00:04:50.896 killing process with pid 57723 00:04:50.896 09:10:42 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.896 09:10:42 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.896 09:10:42 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57723' 00:04:50.896 09:10:42 json_config -- common/autotest_common.sh@969 -- # kill 57723 00:04:50.896 09:10:42 json_config -- common/autotest_common.sh@974 -- # wait 57723 00:04:51.155 09:10:42 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:51.155 09:10:42 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:51.155 09:10:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:51.155 09:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.155 INFO: Success 00:04:51.155 09:10:42 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:51.155 09:10:42 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:51.155 ************************************ 00:04:51.155 END TEST json_config 00:04:51.155 
************************************ 00:04:51.155 00:04:51.155 real 0m8.884s 00:04:51.155 user 0m12.614s 00:04:51.155 sys 0m1.954s 00:04:51.155 09:10:42 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.155 09:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.155 09:10:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.155 09:10:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.155 09:10:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.155 09:10:42 -- common/autotest_common.sh@10 -- # set +x 00:04:51.155 ************************************ 00:04:51.155 START TEST json_config_extra_key 00:04:51.155 ************************************ 00:04:51.155 09:10:42 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.155 09:10:42 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:51.155 09:10:42 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:51.155 09:10:42 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:51.155 09:10:42 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:51.155 09:10:42 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.156 09:10:42 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:51.156 09:10:42 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.156 09:10:42 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:51.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.156 --rc genhtml_branch_coverage=1 00:04:51.156 --rc genhtml_function_coverage=1 00:04:51.156 --rc genhtml_legend=1 00:04:51.156 --rc geninfo_all_blocks=1 00:04:51.156 --rc geninfo_unexecuted_blocks=1 00:04:51.156 00:04:51.156 ' 00:04:51.156 09:10:42 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:51.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.156 --rc genhtml_branch_coverage=1 00:04:51.156 --rc genhtml_function_coverage=1 00:04:51.156 --rc genhtml_legend=1 00:04:51.156 --rc geninfo_all_blocks=1 00:04:51.156 --rc geninfo_unexecuted_blocks=1 00:04:51.156 00:04:51.156 ' 00:04:51.156 09:10:42 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:51.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.156 --rc genhtml_branch_coverage=1 00:04:51.156 --rc genhtml_function_coverage=1 00:04:51.156 --rc genhtml_legend=1 00:04:51.156 --rc geninfo_all_blocks=1 00:04:51.156 --rc geninfo_unexecuted_blocks=1 00:04:51.156 00:04:51.156 ' 00:04:51.156 09:10:42 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:51.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.156 --rc genhtml_branch_coverage=1 00:04:51.156 --rc genhtml_function_coverage=1 00:04:51.156 --rc genhtml_legend=1 00:04:51.156 --rc geninfo_all_blocks=1 00:04:51.156 --rc geninfo_unexecuted_blocks=1 00:04:51.156 00:04:51.156 ' 00:04:51.156 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.415 09:10:42 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.415 09:10:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.415 09:10:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.415 09:10:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.415 09:10:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.416 09:10:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.416 09:10:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.416 09:10:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.416 09:10:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.416 09:10:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.416 09:10:42 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.416 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.416 09:10:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:51.416 INFO: launching applications... 00:04:51.416 Waiting for target to run... 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:51.416 09:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57874 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57874 /var/tmp/spdk_tgt.sock 00:04:51.416 09:10:42 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57874 ']' 00:04:51.416 09:10:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.416 09:10:42 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.416 09:10:42 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.416 09:10:42 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.416 09:10:42 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.416 09:10:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.416 [2024-10-08 09:10:42.935544] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:51.416 [2024-10-08 09:10:42.935888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57874 ] 00:04:51.985 [2024-10-08 09:10:43.358007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.985 [2024-10-08 09:10:43.447532] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.985 [2024-10-08 09:10:43.479262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:52.261 09:10:43 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.261 09:10:43 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:52.261 09:10:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:52.261 00:04:52.261 09:10:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:52.261 INFO: shutting down applications... 
00:04:52.261 09:10:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:52.261 09:10:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:52.261 09:10:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:52.261 09:10:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57874 ]] 00:04:52.261 09:10:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57874 00:04:52.261 09:10:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:52.261 09:10:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.261 09:10:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57874 00:04:52.261 09:10:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.843 09:10:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.843 09:10:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.843 09:10:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57874 00:04:52.843 09:10:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:52.843 09:10:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:52.843 09:10:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:52.843 SPDK target shutdown done 00:04:52.843 Success 00:04:52.843 09:10:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:52.843 09:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:52.844 00:04:52.844 real 0m1.723s 00:04:52.844 user 0m1.616s 00:04:52.844 sys 0m0.443s 00:04:52.844 ************************************ 00:04:52.844 END TEST json_config_extra_key 00:04:52.844 ************************************ 00:04:52.844 09:10:44 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.844 09:10:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.844 09:10:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.844 09:10:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.844 09:10:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.844 09:10:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.844 ************************************ 00:04:52.844 START TEST alias_rpc 00:04:52.844 ************************************ 00:04:52.844 09:10:44 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.844 * Looking for test storage... 
00:04:53.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.103 09:10:44 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:53.103 09:10:44 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:53.103 09:10:44 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:53.103 09:10:44 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.103 09:10:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.103 09:10:44 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.103 09:10:44 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:53.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.103 --rc genhtml_branch_coverage=1 00:04:53.103 --rc genhtml_function_coverage=1 00:04:53.103 --rc genhtml_legend=1 00:04:53.103 --rc geninfo_all_blocks=1 00:04:53.103 --rc geninfo_unexecuted_blocks=1 00:04:53.103 00:04:53.103 ' 00:04:53.103 09:10:44 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:53.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.103 --rc genhtml_branch_coverage=1 00:04:53.103 --rc genhtml_function_coverage=1 00:04:53.103 --rc genhtml_legend=1 00:04:53.103 --rc geninfo_all_blocks=1 00:04:53.103 --rc geninfo_unexecuted_blocks=1 00:04:53.103 00:04:53.103 ' 00:04:53.103 09:10:44 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:53.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.103 --rc genhtml_branch_coverage=1 00:04:53.103 --rc genhtml_function_coverage=1 00:04:53.103 --rc genhtml_legend=1 00:04:53.103 --rc geninfo_all_blocks=1 00:04:53.103 --rc geninfo_unexecuted_blocks=1 00:04:53.103 00:04:53.103 ' 00:04:53.103 09:10:44 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:53.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.104 --rc genhtml_branch_coverage=1 00:04:53.104 --rc genhtml_function_coverage=1 00:04:53.104 --rc genhtml_legend=1 00:04:53.104 --rc geninfo_all_blocks=1 00:04:53.104 --rc geninfo_unexecuted_blocks=1 00:04:53.104 00:04:53.104 ' 00:04:53.104 09:10:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.104 09:10:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57950 00:04:53.104 09:10:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.104 09:10:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57950 00:04:53.104 09:10:44 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57950 ']' 00:04:53.104 09:10:44 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.104 09:10:44 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.104 09:10:44 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.104 09:10:44 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.104 09:10:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.104 [2024-10-08 09:10:44.706263] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:04:53.104 [2024-10-08 09:10:44.706557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57950 ] 00:04:53.363 [2024-10-08 09:10:44.846819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.363 [2024-10-08 09:10:44.934517] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.363 [2024-10-08 09:10:45.006274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:54.297 09:10:45 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.297 09:10:45 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:54.297 09:10:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:54.556 09:10:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57950 00:04:54.556 09:10:45 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57950 ']' 00:04:54.556 09:10:45 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57950 00:04:54.556 09:10:45 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:54.556 09:10:45 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.556 09:10:45 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57950 00:04:54.556 killing process with pid 57950 00:04:54.556 09:10:46 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.556 09:10:46 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.556 09:10:46 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57950' 00:04:54.556 09:10:46 alias_rpc -- common/autotest_common.sh@969 -- # kill 57950 00:04:54.556 09:10:46 alias_rpc -- common/autotest_common.sh@974 -- # wait 57950 00:04:54.814 ************************************ 00:04:54.815 END TEST alias_rpc 00:04:54.815 ************************************ 00:04:54.815 00:04:54.815 real 0m1.970s 00:04:54.815 user 0m2.246s 00:04:54.815 sys 0m0.443s 00:04:54.815 09:10:46 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.815 09:10:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.815 09:10:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:54.815 09:10:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.815 09:10:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.815 09:10:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.815 09:10:46 -- common/autotest_common.sh@10 -- # set +x 00:04:54.815 ************************************ 00:04:54.815 START TEST spdkcli_tcp 00:04:54.815 ************************************ 00:04:54.815 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:55.074 * Looking for test storage... 
00:04:55.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.074 09:10:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:55.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.074 --rc genhtml_branch_coverage=1 00:04:55.074 --rc genhtml_function_coverage=1 00:04:55.074 --rc genhtml_legend=1 00:04:55.074 --rc geninfo_all_blocks=1 00:04:55.074 --rc geninfo_unexecuted_blocks=1 00:04:55.074 00:04:55.074 ' 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:55.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.074 --rc genhtml_branch_coverage=1 00:04:55.074 --rc genhtml_function_coverage=1 00:04:55.074 --rc genhtml_legend=1 00:04:55.074 --rc geninfo_all_blocks=1 00:04:55.074 --rc geninfo_unexecuted_blocks=1 00:04:55.074 
00:04:55.074 ' 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:55.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.074 --rc genhtml_branch_coverage=1 00:04:55.074 --rc genhtml_function_coverage=1 00:04:55.074 --rc genhtml_legend=1 00:04:55.074 --rc geninfo_all_blocks=1 00:04:55.074 --rc geninfo_unexecuted_blocks=1 00:04:55.074 00:04:55.074 ' 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:55.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.074 --rc genhtml_branch_coverage=1 00:04:55.074 --rc genhtml_function_coverage=1 00:04:55.074 --rc genhtml_legend=1 00:04:55.074 --rc geninfo_all_blocks=1 00:04:55.074 --rc geninfo_unexecuted_blocks=1 00:04:55.074 00:04:55.074 ' 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58034 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:55.074 09:10:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58034 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58034 ']' 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.074 09:10:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.074 [2024-10-08 09:10:46.731593] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:04:55.074 [2024-10-08 09:10:46.731927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58034 ] 00:04:55.332 [2024-10-08 09:10:46.871448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.332 [2024-10-08 09:10:46.957592] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.332 [2024-10-08 09:10:46.957617] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.590 [2024-10-08 09:10:47.024611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.155 09:10:47 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.155 09:10:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:56.155 09:10:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58051 00:04:56.155 09:10:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:56.155 09:10:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.414 [ 00:04:56.414 "bdev_malloc_delete", 00:04:56.414 "bdev_malloc_create", 00:04:56.414 "bdev_null_resize", 00:04:56.414 "bdev_null_delete", 00:04:56.414 "bdev_null_create", 00:04:56.414 "bdev_nvme_cuse_unregister", 00:04:56.414 "bdev_nvme_cuse_register", 00:04:56.414 "bdev_opal_new_user", 00:04:56.414 "bdev_opal_set_lock_state", 00:04:56.414 "bdev_opal_delete", 00:04:56.414 "bdev_opal_get_info", 00:04:56.414 "bdev_opal_create", 00:04:56.414 "bdev_nvme_opal_revert", 00:04:56.414 "bdev_nvme_opal_init", 00:04:56.414 "bdev_nvme_send_cmd", 00:04:56.414 "bdev_nvme_set_keys", 00:04:56.414 "bdev_nvme_get_path_iostat", 00:04:56.414 "bdev_nvme_get_mdns_discovery_info", 00:04:56.414 "bdev_nvme_stop_mdns_discovery", 00:04:56.414 "bdev_nvme_start_mdns_discovery", 00:04:56.414 "bdev_nvme_set_multipath_policy", 00:04:56.414 "bdev_nvme_set_preferred_path", 00:04:56.414 "bdev_nvme_get_io_paths", 00:04:56.414 "bdev_nvme_remove_error_injection", 00:04:56.414 "bdev_nvme_add_error_injection", 00:04:56.414 "bdev_nvme_get_discovery_info", 00:04:56.414 "bdev_nvme_stop_discovery", 00:04:56.414 "bdev_nvme_start_discovery", 00:04:56.414 "bdev_nvme_get_controller_health_info", 00:04:56.414 "bdev_nvme_disable_controller", 00:04:56.414 "bdev_nvme_enable_controller", 00:04:56.414 "bdev_nvme_reset_controller", 00:04:56.414 "bdev_nvme_get_transport_statistics", 00:04:56.414 "bdev_nvme_apply_firmware", 00:04:56.414 "bdev_nvme_detach_controller", 00:04:56.414 "bdev_nvme_get_controllers", 00:04:56.414 "bdev_nvme_attach_controller", 00:04:56.414 "bdev_nvme_set_hotplug", 00:04:56.414 "bdev_nvme_set_options", 00:04:56.414 "bdev_passthru_delete", 00:04:56.414 "bdev_passthru_create", 00:04:56.414 "bdev_lvol_set_parent_bdev", 00:04:56.414 "bdev_lvol_set_parent", 00:04:56.414 "bdev_lvol_check_shallow_copy", 00:04:56.414 "bdev_lvol_start_shallow_copy", 00:04:56.414 "bdev_lvol_grow_lvstore", 00:04:56.414 "bdev_lvol_get_lvols", 00:04:56.414 "bdev_lvol_get_lvstores", 00:04:56.414 "bdev_lvol_delete", 00:04:56.414 "bdev_lvol_set_read_only", 00:04:56.414 "bdev_lvol_resize", 00:04:56.414 "bdev_lvol_decouple_parent", 00:04:56.414 "bdev_lvol_inflate", 00:04:56.414 "bdev_lvol_rename", 00:04:56.414 "bdev_lvol_clone_bdev", 00:04:56.414 "bdev_lvol_clone", 00:04:56.414 "bdev_lvol_snapshot", 
00:04:56.414 "bdev_lvol_create", 00:04:56.414 "bdev_lvol_delete_lvstore", 00:04:56.414 "bdev_lvol_rename_lvstore", 00:04:56.414 "bdev_lvol_create_lvstore", 00:04:56.414 "bdev_raid_set_options", 00:04:56.414 "bdev_raid_remove_base_bdev", 00:04:56.414 "bdev_raid_add_base_bdev", 00:04:56.414 "bdev_raid_delete", 00:04:56.414 "bdev_raid_create", 00:04:56.414 "bdev_raid_get_bdevs", 00:04:56.414 "bdev_error_inject_error", 00:04:56.414 "bdev_error_delete", 00:04:56.414 "bdev_error_create", 00:04:56.414 "bdev_split_delete", 00:04:56.414 "bdev_split_create", 00:04:56.414 "bdev_delay_delete", 00:04:56.414 "bdev_delay_create", 00:04:56.414 "bdev_delay_update_latency", 00:04:56.414 "bdev_zone_block_delete", 00:04:56.414 "bdev_zone_block_create", 00:04:56.414 "blobfs_create", 00:04:56.414 "blobfs_detect", 00:04:56.414 "blobfs_set_cache_size", 00:04:56.414 "bdev_aio_delete", 00:04:56.414 "bdev_aio_rescan", 00:04:56.414 "bdev_aio_create", 00:04:56.414 "bdev_ftl_set_property", 00:04:56.414 "bdev_ftl_get_properties", 00:04:56.414 "bdev_ftl_get_stats", 00:04:56.414 "bdev_ftl_unmap", 00:04:56.414 "bdev_ftl_unload", 00:04:56.414 "bdev_ftl_delete", 00:04:56.414 "bdev_ftl_load", 00:04:56.414 "bdev_ftl_create", 00:04:56.414 "bdev_virtio_attach_controller", 00:04:56.414 "bdev_virtio_scsi_get_devices", 00:04:56.414 "bdev_virtio_detach_controller", 00:04:56.414 "bdev_virtio_blk_set_hotplug", 00:04:56.414 "bdev_iscsi_delete", 00:04:56.414 "bdev_iscsi_create", 00:04:56.414 "bdev_iscsi_set_options", 00:04:56.414 "bdev_uring_delete", 00:04:56.414 "bdev_uring_rescan", 00:04:56.414 "bdev_uring_create", 00:04:56.414 "accel_error_inject_error", 00:04:56.414 "ioat_scan_accel_module", 00:04:56.414 "dsa_scan_accel_module", 00:04:56.414 "iaa_scan_accel_module", 00:04:56.414 "keyring_file_remove_key", 00:04:56.414 "keyring_file_add_key", 00:04:56.414 "keyring_linux_set_options", 00:04:56.414 "fsdev_aio_delete", 00:04:56.414 "fsdev_aio_create", 00:04:56.414 "iscsi_get_histogram", 00:04:56.414 "iscsi_enable_histogram", 00:04:56.414 "iscsi_set_options", 00:04:56.414 "iscsi_get_auth_groups", 00:04:56.414 "iscsi_auth_group_remove_secret", 00:04:56.414 "iscsi_auth_group_add_secret", 00:04:56.414 "iscsi_delete_auth_group", 00:04:56.414 "iscsi_create_auth_group", 00:04:56.414 "iscsi_set_discovery_auth", 00:04:56.414 "iscsi_get_options", 00:04:56.414 "iscsi_target_node_request_logout", 00:04:56.414 "iscsi_target_node_set_redirect", 00:04:56.414 "iscsi_target_node_set_auth", 00:04:56.414 "iscsi_target_node_add_lun", 00:04:56.414 "iscsi_get_stats", 00:04:56.414 "iscsi_get_connections", 00:04:56.414 "iscsi_portal_group_set_auth", 00:04:56.414 "iscsi_start_portal_group", 00:04:56.414 "iscsi_delete_portal_group", 00:04:56.414 "iscsi_create_portal_group", 00:04:56.414 "iscsi_get_portal_groups", 00:04:56.414 "iscsi_delete_target_node", 00:04:56.414 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.414 "iscsi_target_node_add_pg_ig_maps", 00:04:56.414 "iscsi_create_target_node", 00:04:56.414 "iscsi_get_target_nodes", 00:04:56.414 "iscsi_delete_initiator_group", 00:04:56.414 "iscsi_initiator_group_remove_initiators", 00:04:56.414 "iscsi_initiator_group_add_initiators", 00:04:56.414 "iscsi_create_initiator_group", 00:04:56.414 "iscsi_get_initiator_groups", 00:04:56.414 "nvmf_set_crdt", 00:04:56.414 "nvmf_set_config", 00:04:56.414 "nvmf_set_max_subsystems", 00:04:56.414 "nvmf_stop_mdns_prr", 00:04:56.414 "nvmf_publish_mdns_prr", 00:04:56.414 "nvmf_subsystem_get_listeners", 00:04:56.414 "nvmf_subsystem_get_qpairs", 00:04:56.414 
"nvmf_subsystem_get_controllers", 00:04:56.414 "nvmf_get_stats", 00:04:56.414 "nvmf_get_transports", 00:04:56.414 "nvmf_create_transport", 00:04:56.414 "nvmf_get_targets", 00:04:56.414 "nvmf_delete_target", 00:04:56.414 "nvmf_create_target", 00:04:56.414 "nvmf_subsystem_allow_any_host", 00:04:56.414 "nvmf_subsystem_set_keys", 00:04:56.414 "nvmf_subsystem_remove_host", 00:04:56.414 "nvmf_subsystem_add_host", 00:04:56.414 "nvmf_ns_remove_host", 00:04:56.414 "nvmf_ns_add_host", 00:04:56.414 "nvmf_subsystem_remove_ns", 00:04:56.414 "nvmf_subsystem_set_ns_ana_group", 00:04:56.414 "nvmf_subsystem_add_ns", 00:04:56.414 "nvmf_subsystem_listener_set_ana_state", 00:04:56.414 "nvmf_discovery_get_referrals", 00:04:56.414 "nvmf_discovery_remove_referral", 00:04:56.414 "nvmf_discovery_add_referral", 00:04:56.414 "nvmf_subsystem_remove_listener", 00:04:56.415 "nvmf_subsystem_add_listener", 00:04:56.415 "nvmf_delete_subsystem", 00:04:56.415 "nvmf_create_subsystem", 00:04:56.415 "nvmf_get_subsystems", 00:04:56.415 "env_dpdk_get_mem_stats", 00:04:56.415 "nbd_get_disks", 00:04:56.415 "nbd_stop_disk", 00:04:56.415 "nbd_start_disk", 00:04:56.415 "ublk_recover_disk", 00:04:56.415 "ublk_get_disks", 00:04:56.415 "ublk_stop_disk", 00:04:56.415 "ublk_start_disk", 00:04:56.415 "ublk_destroy_target", 00:04:56.415 "ublk_create_target", 00:04:56.415 "virtio_blk_create_transport", 00:04:56.415 "virtio_blk_get_transports", 00:04:56.415 "vhost_controller_set_coalescing", 00:04:56.415 "vhost_get_controllers", 00:04:56.415 "vhost_delete_controller", 00:04:56.415 "vhost_create_blk_controller", 00:04:56.415 "vhost_scsi_controller_remove_target", 00:04:56.415 "vhost_scsi_controller_add_target", 00:04:56.415 "vhost_start_scsi_controller", 00:04:56.415 "vhost_create_scsi_controller", 00:04:56.415 "thread_set_cpumask", 00:04:56.415 "scheduler_set_options", 00:04:56.415 "framework_get_governor", 00:04:56.415 "framework_get_scheduler", 00:04:56.415 "framework_set_scheduler", 00:04:56.415 "framework_get_reactors", 00:04:56.415 "thread_get_io_channels", 00:04:56.415 "thread_get_pollers", 00:04:56.415 "thread_get_stats", 00:04:56.415 "framework_monitor_context_switch", 00:04:56.415 "spdk_kill_instance", 00:04:56.415 "log_enable_timestamps", 00:04:56.415 "log_get_flags", 00:04:56.415 "log_clear_flag", 00:04:56.415 "log_set_flag", 00:04:56.415 "log_get_level", 00:04:56.415 "log_set_level", 00:04:56.415 "log_get_print_level", 00:04:56.415 "log_set_print_level", 00:04:56.415 "framework_enable_cpumask_locks", 00:04:56.415 "framework_disable_cpumask_locks", 00:04:56.415 "framework_wait_init", 00:04:56.415 "framework_start_init", 00:04:56.415 "scsi_get_devices", 00:04:56.415 "bdev_get_histogram", 00:04:56.415 "bdev_enable_histogram", 00:04:56.415 "bdev_set_qos_limit", 00:04:56.415 "bdev_set_qd_sampling_period", 00:04:56.415 "bdev_get_bdevs", 00:04:56.415 "bdev_reset_iostat", 00:04:56.415 "bdev_get_iostat", 00:04:56.415 "bdev_examine", 00:04:56.415 "bdev_wait_for_examine", 00:04:56.415 "bdev_set_options", 00:04:56.415 "accel_get_stats", 00:04:56.415 "accel_set_options", 00:04:56.415 "accel_set_driver", 00:04:56.415 "accel_crypto_key_destroy", 00:04:56.415 "accel_crypto_keys_get", 00:04:56.415 "accel_crypto_key_create", 00:04:56.415 "accel_assign_opc", 00:04:56.415 "accel_get_module_info", 00:04:56.415 "accel_get_opc_assignments", 00:04:56.415 "vmd_rescan", 00:04:56.415 "vmd_remove_device", 00:04:56.415 "vmd_enable", 00:04:56.415 "sock_get_default_impl", 00:04:56.415 "sock_set_default_impl", 00:04:56.415 "sock_impl_set_options", 00:04:56.415 
"sock_impl_get_options", 00:04:56.415 "iobuf_get_stats", 00:04:56.415 "iobuf_set_options", 00:04:56.415 "keyring_get_keys", 00:04:56.415 "framework_get_pci_devices", 00:04:56.415 "framework_get_config", 00:04:56.415 "framework_get_subsystems", 00:04:56.415 "fsdev_set_opts", 00:04:56.415 "fsdev_get_opts", 00:04:56.415 "trace_get_info", 00:04:56.415 "trace_get_tpoint_group_mask", 00:04:56.415 "trace_disable_tpoint_group", 00:04:56.415 "trace_enable_tpoint_group", 00:04:56.415 "trace_clear_tpoint_mask", 00:04:56.415 "trace_set_tpoint_mask", 00:04:56.415 "notify_get_notifications", 00:04:56.415 "notify_get_types", 00:04:56.415 "spdk_get_version", 00:04:56.415 "rpc_get_methods" 00:04:56.415 ] 00:04:56.415 09:10:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.415 09:10:47 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.415 09:10:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.415 09:10:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.415 09:10:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58034 00:04:56.415 09:10:47 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58034 ']' 00:04:56.415 09:10:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58034 00:04:56.415 09:10:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:56.415 09:10:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.415 09:10:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58034 00:04:56.415 killing process with pid 58034 00:04:56.415 09:10:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.415 09:10:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.415 09:10:48 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58034' 00:04:56.415 09:10:48 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58034 00:04:56.415 09:10:48 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58034 00:04:56.982 ************************************ 00:04:56.982 END TEST spdkcli_tcp 00:04:56.982 ************************************ 00:04:56.982 00:04:56.982 real 0m1.947s 00:04:56.982 user 0m3.484s 00:04:56.982 sys 0m0.546s 00:04:56.982 09:10:48 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.982 09:10:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.982 09:10:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.982 09:10:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.982 09:10:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.982 09:10:48 -- common/autotest_common.sh@10 -- # set +x 00:04:56.982 ************************************ 00:04:56.982 START TEST dpdk_mem_utility 00:04:56.982 ************************************ 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.982 * Looking for test storage... 
00:04:56.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.982 09:10:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.982 --rc genhtml_branch_coverage=1 00:04:56.982 --rc genhtml_function_coverage=1 00:04:56.982 --rc genhtml_legend=1 00:04:56.982 --rc geninfo_all_blocks=1 00:04:56.982 --rc geninfo_unexecuted_blocks=1 00:04:56.982 00:04:56.982 ' 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.982 --rc 
genhtml_branch_coverage=1 00:04:56.982 --rc genhtml_function_coverage=1 00:04:56.982 --rc genhtml_legend=1 00:04:56.982 --rc geninfo_all_blocks=1 00:04:56.982 --rc geninfo_unexecuted_blocks=1 00:04:56.982 00:04:56.982 ' 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.982 --rc genhtml_branch_coverage=1 00:04:56.982 --rc genhtml_function_coverage=1 00:04:56.982 --rc genhtml_legend=1 00:04:56.982 --rc geninfo_all_blocks=1 00:04:56.982 --rc geninfo_unexecuted_blocks=1 00:04:56.982 00:04:56.982 ' 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.982 --rc genhtml_branch_coverage=1 00:04:56.982 --rc genhtml_function_coverage=1 00:04:56.982 --rc genhtml_legend=1 00:04:56.982 --rc geninfo_all_blocks=1 00:04:56.982 --rc geninfo_unexecuted_blocks=1 00:04:56.982 00:04:56.982 ' 00:04:56.982 09:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.982 09:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58134 00:04:56.982 09:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58134 00:04:56.982 09:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58134 ']' 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.982 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.983 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.983 09:10:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.242 [2024-10-08 09:10:48.725837] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:04:57.242 [2024-10-08 09:10:48.725958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58134 ] 00:04:57.242 [2024-10-08 09:10:48.864213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.520 [2024-10-08 09:10:48.960319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.520 [2024-10-08 09:10:49.029755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:58.092 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.092 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:58.092 09:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:58.092 09:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:58.092 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.092 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.092 { 00:04:58.092 "filename": "/tmp/spdk_mem_dump.txt" 00:04:58.092 } 00:04:58.092 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.092 09:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:58.092 DPDK memory size 860.000000 MiB in 1 heap(s) 00:04:58.092 1 heaps totaling size 860.000000 MiB 00:04:58.092 size: 860.000000 MiB heap id: 0 00:04:58.093 end heaps---------- 00:04:58.093 9 mempools totaling size 642.649841 MiB 00:04:58.093 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:58.093 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:58.093 size: 92.545471 MiB name: bdev_io_58134 00:04:58.093 size: 51.011292 MiB name: evtpool_58134 00:04:58.093 size: 50.003479 MiB name: msgpool_58134 00:04:58.093 size: 36.509338 MiB name: fsdev_io_58134 00:04:58.093 size: 21.763794 MiB name: PDU_Pool 00:04:58.093 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:58.093 size: 0.026123 MiB name: Session_Pool 00:04:58.093 end mempools------- 00:04:58.093 6 memzones totaling size 4.142822 MiB 00:04:58.093 size: 1.000366 MiB name: RG_ring_0_58134 00:04:58.093 size: 1.000366 MiB name: RG_ring_1_58134 00:04:58.093 size: 1.000366 MiB name: RG_ring_4_58134 00:04:58.093 size: 1.000366 MiB name: RG_ring_5_58134 00:04:58.093 size: 0.125366 MiB name: RG_ring_2_58134 00:04:58.093 size: 0.015991 MiB name: RG_ring_3_58134 00:04:58.093 end memzones------- 00:04:58.093 09:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:58.352 heap id: 0 total size: 860.000000 MiB number of busy elements: 308 number of free elements: 16 00:04:58.352 list of free elements. 
size: 13.936340 MiB 00:04:58.352 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:58.353 element at address: 0x200000800000 with size: 1.996948 MiB 00:04:58.353 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:04:58.353 element at address: 0x20001be00000 with size: 0.999878 MiB 00:04:58.353 element at address: 0x200034a00000 with size: 0.994446 MiB 00:04:58.353 element at address: 0x200009600000 with size: 0.959839 MiB 00:04:58.353 element at address: 0x200015e00000 with size: 0.954285 MiB 00:04:58.353 element at address: 0x20001c000000 with size: 0.936584 MiB 00:04:58.353 element at address: 0x200000200000 with size: 0.834839 MiB 00:04:58.353 element at address: 0x20001d800000 with size: 0.568420 MiB 00:04:58.353 element at address: 0x20000d800000 with size: 0.489807 MiB 00:04:58.353 element at address: 0x200003e00000 with size: 0.487000 MiB 00:04:58.353 element at address: 0x20001c200000 with size: 0.485657 MiB 00:04:58.353 element at address: 0x200007000000 with size: 0.480286 MiB 00:04:58.353 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:04:58.353 element at address: 0x200003a00000 with size: 0.353210 MiB 00:04:58.353 list of standard malloc elements. size: 199.266968 MiB 00:04:58.353 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:04:58.353 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:04:58.353 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:04:58.353 element at address: 0x20001befff80 with size: 1.000122 MiB 00:04:58.353 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:04:58.353 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:58.353 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:04:58.353 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:58.353 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:04:58.353 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:04:58.353 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a5eb80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003aff940 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7cac0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7cb80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:04:58.353 element at 
address: 0x200003e7d300 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003eff000 with size: 0.000183 MiB 00:04:58.353 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707af40 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b000 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b180 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b240 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b300 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b480 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b540 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b600 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000070fb980 
with size: 0.000183 MiB 00:04:58.353 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:04:58.353 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891840 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891900 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892080 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892140 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892200 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892380 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892440 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892500 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892680 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892740 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892800 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892980 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893040 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893100 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893280 with size: 0.000183 MiB 
00:04:58.354 element at address: 0x20001d893340 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893400 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893580 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893640 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893700 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893880 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893940 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894000 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894180 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894240 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894300 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894480 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894540 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894600 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894780 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894840 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894900 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d895080 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d895140 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d895200 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d895380 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20001d895440 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:04:58.354 element at 
address: 0x20002ac6c480 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6e940 
with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:04:58.354 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:04:58.355 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:04:58.355 list of memzone associated elements. 
size: 646.796692 MiB 00:04:58.355 element at address: 0x20001d895500 with size: 211.416748 MiB 00:04:58.355 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:58.355 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:04:58.355 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:58.355 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:04:58.355 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58134_0 00:04:58.355 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:58.355 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58134_0 00:04:58.355 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:58.355 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58134_0 00:04:58.355 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:04:58.355 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58134_0 00:04:58.355 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:04:58.355 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:58.355 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:04:58.355 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:58.355 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:58.355 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58134 00:04:58.355 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:58.355 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58134 00:04:58.355 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:58.355 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58134 00:04:58.355 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:04:58.355 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:58.355 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:04:58.355 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:58.355 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:04:58.355 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:58.355 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:04:58.355 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:58.355 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:58.355 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58134 00:04:58.355 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:58.355 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58134 00:04:58.355 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:04:58.355 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58134 00:04:58.355 element at address: 0x200034afe940 with size: 1.000488 MiB 00:04:58.355 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58134 00:04:58.355 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:04:58.355 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58134 00:04:58.355 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:04:58.355 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58134 00:04:58.355 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:04:58.355 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:58.355 element at address: 0x20000707b780 with size: 0.500488 MiB 00:04:58.355 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:04:58.355 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:04:58.355 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:58.355 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:04:58.355 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58134 00:04:58.355 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:04:58.355 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:58.355 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:04:58.355 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:58.355 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:04:58.355 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58134 00:04:58.355 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:04:58.355 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:58.355 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:58.355 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58134 00:04:58.355 element at address: 0x200003affa00 with size: 0.000305 MiB 00:04:58.355 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58134 00:04:58.355 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:04:58.355 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58134 00:04:58.355 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:04:58.355 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:58.355 09:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:58.355 09:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58134 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58134 ']' 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58134 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58134 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.355 killing process with pid 58134 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58134' 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58134 00:04:58.355 09:10:49 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58134 00:04:58.614 00:04:58.614 real 0m1.760s 00:04:58.614 user 0m1.827s 00:04:58.614 sys 0m0.475s 00:04:58.614 09:10:50 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.614 ************************************ 00:04:58.614 END TEST dpdk_mem_utility 00:04:58.614 ************************************ 00:04:58.614 09:10:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.614 09:10:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:58.614 09:10:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.614 09:10:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.614 09:10:50 -- common/autotest_common.sh@10 -- # set +x 
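Before the event suite starts below, one note on reading the dump above: the mempools, rings and memzones are name-keyed by the target's PID (58134 in this run), which is how per-run allocations are told apart from anything left over by earlier processes. A quick way to pull out just this run's entries again is to filter the dpdk_mem_info.py output on that suffix (a sketch, assuming the same script invocation as above):

  ./scripts/dpdk_mem_info.py | grep '_58134'   # msgpool_58134, evtpool_58134, bdev_io_58134, fsdev_io_58134, RG_ring_*_58134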
00:04:58.614 ************************************ 00:04:58.614 START TEST event 00:04:58.614 ************************************ 00:04:58.614 09:10:50 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:58.906 * Looking for test storage... 00:04:58.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:58.906 09:10:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.906 09:10:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.906 09:10:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.906 09:10:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.906 09:10:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.906 09:10:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.906 09:10:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.906 09:10:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.906 09:10:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.906 09:10:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.906 09:10:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.906 09:10:50 event -- scripts/common.sh@344 -- # case "$op" in 00:04:58.906 09:10:50 event -- scripts/common.sh@345 -- # : 1 00:04:58.906 09:10:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.906 09:10:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.906 09:10:50 event -- scripts/common.sh@365 -- # decimal 1 00:04:58.906 09:10:50 event -- scripts/common.sh@353 -- # local d=1 00:04:58.906 09:10:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.906 09:10:50 event -- scripts/common.sh@355 -- # echo 1 00:04:58.906 09:10:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.906 09:10:50 event -- scripts/common.sh@366 -- # decimal 2 00:04:58.906 09:10:50 event -- scripts/common.sh@353 -- # local d=2 00:04:58.906 09:10:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.906 09:10:50 event -- scripts/common.sh@355 -- # echo 2 00:04:58.906 09:10:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.906 09:10:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.906 09:10:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.906 09:10:50 event -- scripts/common.sh@368 -- # return 0 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:58.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.906 --rc genhtml_branch_coverage=1 00:04:58.906 --rc genhtml_function_coverage=1 00:04:58.906 --rc genhtml_legend=1 00:04:58.906 --rc geninfo_all_blocks=1 00:04:58.906 --rc geninfo_unexecuted_blocks=1 00:04:58.906 00:04:58.906 ' 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:58.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.906 --rc genhtml_branch_coverage=1 00:04:58.906 --rc genhtml_function_coverage=1 00:04:58.906 --rc genhtml_legend=1 00:04:58.906 --rc 
geninfo_all_blocks=1 00:04:58.906 --rc geninfo_unexecuted_blocks=1 00:04:58.906 00:04:58.906 ' 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:58.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.906 --rc genhtml_branch_coverage=1 00:04:58.906 --rc genhtml_function_coverage=1 00:04:58.906 --rc genhtml_legend=1 00:04:58.906 --rc geninfo_all_blocks=1 00:04:58.906 --rc geninfo_unexecuted_blocks=1 00:04:58.906 00:04:58.906 ' 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:58.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.906 --rc genhtml_branch_coverage=1 00:04:58.906 --rc genhtml_function_coverage=1 00:04:58.906 --rc genhtml_legend=1 00:04:58.906 --rc geninfo_all_blocks=1 00:04:58.906 --rc geninfo_unexecuted_blocks=1 00:04:58.906 00:04:58.906 ' 00:04:58.906 09:10:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:58.906 09:10:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:58.906 09:10:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:58.906 09:10:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.906 09:10:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.906 ************************************ 00:04:58.906 START TEST event_perf 00:04:58.906 ************************************ 00:04:58.906 09:10:50 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.906 Running I/O for 1 seconds...[2024-10-08 09:10:50.486209] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:58.906 [2024-10-08 09:10:50.486344] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58219 ] 00:04:59.164 [2024-10-08 09:10:50.623065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.164 [2024-10-08 09:10:50.708001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.164 [2024-10-08 09:10:50.708135] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.164 Running I/O for 1 seconds...[2024-10-08 09:10:50.708374] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.164 [2024-10-08 09:10:50.708379] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.537 00:05:00.537 lcore 0: 117059 00:05:00.537 lcore 1: 117059 00:05:00.537 lcore 2: 117060 00:05:00.537 lcore 3: 117060 00:05:00.537 done. 
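Each of the four reactors (cores 0-3, from the -m 0xF mask in the invocation above) reports roughly 117 thousand events for the 1-second run, and the near-identical per-lcore counts suggest the load was spread evenly across them. The same binary can be re-run by hand with a different core mask or duration; this is a sketch, with the -m and -t flags taken from the invocation above and the path assuming the same build tree:

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5   # two cores, five seconds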
00:05:00.537 00:05:00.537 real 0m1.325s 00:05:00.537 user 0m4.125s 00:05:00.537 sys 0m0.077s 00:05:00.537 09:10:51 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.537 09:10:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.537 ************************************ 00:05:00.537 END TEST event_perf 00:05:00.537 ************************************ 00:05:00.537 09:10:51 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:00.537 09:10:51 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:00.537 09:10:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.537 09:10:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.537 ************************************ 00:05:00.537 START TEST event_reactor 00:05:00.537 ************************************ 00:05:00.537 09:10:51 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:00.537 [2024-10-08 09:10:51.863459] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:00.537 [2024-10-08 09:10:51.863568] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58252 ] 00:05:00.537 [2024-10-08 09:10:51.999599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.537 [2024-10-08 09:10:52.078016] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.474 test_start 00:05:01.474 oneshot 00:05:01.474 tick 100 00:05:01.474 tick 100 00:05:01.474 tick 250 00:05:01.474 tick 100 00:05:01.474 tick 100 00:05:01.474 tick 100 00:05:01.474 tick 250 00:05:01.474 tick 500 00:05:01.474 tick 100 00:05:01.474 tick 100 00:05:01.474 tick 250 00:05:01.474 tick 100 00:05:01.474 tick 100 00:05:01.474 test_end 00:05:01.732 00:05:01.732 real 0m1.313s 00:05:01.732 user 0m1.153s 00:05:01.732 sys 0m0.054s 00:05:01.732 09:10:53 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.732 09:10:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:01.732 ************************************ 00:05:01.732 END TEST event_reactor 00:05:01.732 ************************************ 00:05:01.732 09:10:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.732 09:10:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:01.732 09:10:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.732 09:10:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.732 ************************************ 00:05:01.732 START TEST event_reactor_perf 00:05:01.732 ************************************ 00:05:01.732 09:10:53 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.732 [2024-10-08 09:10:53.231016] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:05:01.732 [2024-10-08 09:10:53.231115] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58282 ] 00:05:01.732 [2024-10-08 09:10:53.368085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.990 [2024-10-08 09:10:53.484190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.929 test_start 00:05:02.929 test_end 00:05:02.929 Performance: 388728 events per second 00:05:02.929 00:05:02.929 real 0m1.343s 00:05:02.929 user 0m1.180s 00:05:02.929 sys 0m0.057s 00:05:02.929 09:10:54 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.929 09:10:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.929 ************************************ 00:05:02.929 END TEST event_reactor_perf 00:05:02.929 ************************************ 00:05:02.929 09:10:54 event -- event/event.sh@49 -- # uname -s 00:05:02.929 09:10:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.929 09:10:54 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.929 09:10:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.929 09:10:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.929 09:10:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.188 ************************************ 00:05:03.188 START TEST event_scheduler 00:05:03.188 ************************************ 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:03.188 * Looking for test storage... 
00:05:03.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.188 09:10:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.188 --rc genhtml_branch_coverage=1 00:05:03.188 --rc genhtml_function_coverage=1 00:05:03.188 --rc genhtml_legend=1 00:05:03.188 --rc geninfo_all_blocks=1 00:05:03.188 --rc geninfo_unexecuted_blocks=1 00:05:03.188 00:05:03.188 ' 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.188 --rc genhtml_branch_coverage=1 00:05:03.188 --rc genhtml_function_coverage=1 00:05:03.188 --rc genhtml_legend=1 00:05:03.188 --rc geninfo_all_blocks=1 00:05:03.188 --rc geninfo_unexecuted_blocks=1 00:05:03.188 00:05:03.188 ' 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.188 --rc genhtml_branch_coverage=1 00:05:03.188 --rc genhtml_function_coverage=1 00:05:03.188 --rc genhtml_legend=1 00:05:03.188 --rc geninfo_all_blocks=1 00:05:03.188 --rc geninfo_unexecuted_blocks=1 00:05:03.188 00:05:03.188 ' 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.188 --rc genhtml_branch_coverage=1 00:05:03.188 --rc genhtml_function_coverage=1 00:05:03.188 --rc genhtml_legend=1 00:05:03.188 --rc geninfo_all_blocks=1 00:05:03.188 --rc geninfo_unexecuted_blocks=1 00:05:03.188 00:05:03.188 ' 00:05:03.188 09:10:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:03.188 09:10:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58357 00:05:03.188 09:10:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:03.188 09:10:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.188 09:10:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58357 00:05:03.188 09:10:54 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58357 ']' 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.188 09:10:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.188 [2024-10-08 09:10:54.856813] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:03.188 [2024-10-08 09:10:54.856913] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58357 ] 00:05:03.446 [2024-10-08 09:10:54.998194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.706 [2024-10-08 09:10:55.131712] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.706 [2024-10-08 09:10:55.131832] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.706 [2024-10-08 09:10:55.131951] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.706 [2024-10-08 09:10:55.131963] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.273 09:10:55 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.273 09:10:55 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:04.273 09:10:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:04.273 09:10:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.273 09:10:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.273 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.273 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.273 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.273 POWER: Cannot set governor of lcore 0 to performance 00:05:04.273 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.274 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.274 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.274 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.274 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:04.274 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:04.274 POWER: Unable to set Power Management Environment for lcore 0 00:05:04.274 [2024-10-08 09:10:55.914220] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:04.274 [2024-10-08 09:10:55.914234] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:04.274 [2024-10-08 09:10:55.914255] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:04.274 [2024-10-08 09:10:55.914273] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:04.274 [2024-10-08 09:10:55.914280] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:04.274 [2024-10-08 09:10:55.914286] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:04.274 09:10:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.274 09:10:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:04.274 09:10:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.274 09:10:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 [2024-10-08 09:10:55.975450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.533 [2024-10-08 09:10:56.010038] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:04.533 09:10:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:04.533 09:10:56 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.533 09:10:56 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 ************************************ 00:05:04.533 START TEST scheduler_create_thread 00:05:04.533 ************************************ 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 2 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 3 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 4 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 5 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 6 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 7 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 8 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 9 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 10 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.533 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.470 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.470 09:10:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:05.470 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.470 09:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.850 09:10:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.850 09:10:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:06.850 09:10:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:06.850 09:10:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.850 09:10:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.836 09:10:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.836 00:05:07.836 real 0m3.377s 00:05:07.836 user 0m0.018s 00:05:07.836 sys 0m0.008s 00:05:07.836 09:10:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.836 09:10:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.836 ************************************ 00:05:07.836 END TEST scheduler_create_thread 00:05:07.836 ************************************ 00:05:07.836 09:10:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.836 09:10:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58357 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58357 ']' 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58357 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58357 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:07.836 killing process with pid 58357 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58357' 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58357 00:05:07.836 09:10:59 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58357 00:05:08.404 [2024-10-08 09:10:59.779727] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:08.404 00:05:08.404 real 0m5.455s 00:05:08.404 user 0m11.154s 00:05:08.404 sys 0m0.430s 00:05:08.404 09:11:00 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.404 09:11:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.404 ************************************ 00:05:08.404 END TEST event_scheduler 00:05:08.404 ************************************ 00:05:08.663 09:11:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:08.663 09:11:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:08.663 09:11:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.663 09:11:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.663 09:11:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.663 ************************************ 00:05:08.663 START TEST app_repeat 00:05:08.663 ************************************ 00:05:08.663 09:11:00 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:08.663 09:11:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58462 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.664 Process app_repeat pid: 58462 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58462' 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:08.664 spdk_app_start Round 0 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:08.664 09:11:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58462 /var/tmp/spdk-nbd.sock 00:05:08.664 09:11:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58462 ']' 00:05:08.664 09:11:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.664 09:11:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.664 09:11:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
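Before the app_repeat rounds below, it may help to restate what the event_scheduler run above actually did: everything goes through rpc.py against the app's /var/tmp/spdk.sock socket, with the test's scheduler_plugin loaded so the extra scheduler_thread_* methods exist. The following is a condensed sketch of that RPC sequence, not the test script itself; it assumes an SPDK app is already listening on /var/tmp/spdk.sock and that scheduler_plugin is importable (the rpc_cmd wrapper in the trace arranges both), and the thread IDs are whatever the create calls return, not fixed values.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc framework_set_scheduler dynamic          # switch the running app to the dynamic scheduler
    $rpc framework_start_init                     # finish subsystem initialization
    # plugin-provided RPCs: pinned threads with a cpumask (-m) and an "active" percentage (-a)
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50    # raise it to 50% busy
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"           # and remove it again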
00:05:08.664 09:11:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.664 09:11:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.664 [2024-10-08 09:11:00.160639] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:08.664 [2024-10-08 09:11:00.160785] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58462 ] 00:05:08.664 [2024-10-08 09:11:00.296211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.922 [2024-10-08 09:11:00.405496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.922 [2024-10-08 09:11:00.405511] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.922 [2024-10-08 09:11:00.461289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.490 09:11:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.490 09:11:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:09.490 09:11:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.057 Malloc0 00:05:10.057 09:11:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.315 Malloc1 00:05:10.315 09:11:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.315 09:11:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.573 /dev/nbd0 00:05:10.573 09:11:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.573 09:11:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.573 1+0 records in 00:05:10.573 1+0 records out 00:05:10.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323269 s, 12.7 MB/s 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:10.573 09:11:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:10.573 09:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.573 09:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.573 09:11:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.831 /dev/nbd1 00:05:10.831 09:11:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.831 09:11:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.831 1+0 records in 00:05:10.831 1+0 records out 00:05:10.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329897 s, 12.4 MB/s 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.831 09:11:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:10.831 09:11:02 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:05:10.831 09:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.831 09:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.831 09:11:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.831 09:11:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.831 09:11:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.090 { 00:05:11.090 "nbd_device": "/dev/nbd0", 00:05:11.090 "bdev_name": "Malloc0" 00:05:11.090 }, 00:05:11.090 { 00:05:11.090 "nbd_device": "/dev/nbd1", 00:05:11.090 "bdev_name": "Malloc1" 00:05:11.090 } 00:05:11.090 ]' 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.090 { 00:05:11.090 "nbd_device": "/dev/nbd0", 00:05:11.090 "bdev_name": "Malloc0" 00:05:11.090 }, 00:05:11.090 { 00:05:11.090 "nbd_device": "/dev/nbd1", 00:05:11.090 "bdev_name": "Malloc1" 00:05:11.090 } 00:05:11.090 ]' 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.090 /dev/nbd1' 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.090 /dev/nbd1' 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.090 256+0 records in 00:05:11.090 256+0 records out 00:05:11.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00726638 s, 144 MB/s 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.090 256+0 records in 00:05:11.090 256+0 records out 00:05:11.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244593 s, 42.9 MB/s 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.090 09:11:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.350 256+0 records in 00:05:11.350 
256+0 records out 00:05:11.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254798 s, 41.2 MB/s 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.350 09:11:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.608 09:11:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.867 09:11:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.126 09:11:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.126 09:11:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.386 09:11:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.647 [2024-10-08 09:11:04.164288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.647 [2024-10-08 09:11:04.278443] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.647 [2024-10-08 09:11:04.278452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.910 [2024-10-08 09:11:04.336517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.910 [2024-10-08 09:11:04.336618] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.910 [2024-10-08 09:11:04.336632] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.443 spdk_app_start Round 1 00:05:15.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.443 09:11:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.443 09:11:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.443 09:11:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58462 /var/tmp/spdk-nbd.sock 00:05:15.443 09:11:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58462 ']' 00:05:15.443 09:11:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.443 09:11:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.443 09:11:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
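Each of the three app_repeat rounds above repeats the same data-verify cycle: two malloc bdevs are created over the instance's /var/tmp/spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random pattern is written to each with dd and read back with cmp, and the devices are detached again. A condensed sketch of one round, assuming app_repeat is already listening on that socket and the kernel nbd module is loaded:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # 64 MB bdev with 4 KiB blocks -> prints "Malloc0"
    $rpc bdev_malloc_create 64 4096        # -> "Malloc1"
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$randfile bs=4096 count=256        # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$randfile of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M $randfile $nbd        # read back through nbd; any mismatch fails the round
    done
    rm $randfile
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1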
00:05:15.443 09:11:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.443 09:11:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.701 09:11:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.701 09:11:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:15.701 09:11:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.960 Malloc0 00:05:15.960 09:11:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.219 Malloc1 00:05:16.219 09:11:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.219 09:11:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.478 /dev/nbd0 00:05:16.478 09:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.478 09:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.478 1+0 records in 00:05:16.478 1+0 records out 
00:05:16.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521359 s, 7.9 MB/s 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:16.478 09:11:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.736 09:11:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:16.736 09:11:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:16.736 09:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.736 09:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.736 09:11:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.993 /dev/nbd1 00:05:16.993 09:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.993 09:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.993 1+0 records in 00:05:16.993 1+0 records out 00:05:16.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333847 s, 12.3 MB/s 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:16.993 09:11:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:16.993 09:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.993 09:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.993 09:11:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.993 09:11:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.993 09:11:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.252 { 00:05:17.252 "nbd_device": "/dev/nbd0", 00:05:17.252 "bdev_name": "Malloc0" 00:05:17.252 }, 00:05:17.252 { 00:05:17.252 "nbd_device": "/dev/nbd1", 00:05:17.252 "bdev_name": "Malloc1" 00:05:17.252 } 
00:05:17.252 ]' 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.252 { 00:05:17.252 "nbd_device": "/dev/nbd0", 00:05:17.252 "bdev_name": "Malloc0" 00:05:17.252 }, 00:05:17.252 { 00:05:17.252 "nbd_device": "/dev/nbd1", 00:05:17.252 "bdev_name": "Malloc1" 00:05:17.252 } 00:05:17.252 ]' 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.252 /dev/nbd1' 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.252 /dev/nbd1' 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.252 256+0 records in 00:05:17.252 256+0 records out 00:05:17.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123287 s, 85.1 MB/s 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.252 256+0 records in 00:05:17.252 256+0 records out 00:05:17.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271231 s, 38.7 MB/s 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.252 256+0 records in 00:05:17.252 256+0 records out 00:05:17.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303875 s, 34.5 MB/s 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.252 09:11:08 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.252 09:11:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.510 09:11:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.510 09:11:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.510 09:11:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.510 09:11:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.510 09:11:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.510 09:11:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.510 09:11:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.510 09:11:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.769 09:11:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.030 09:11:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.292 09:11:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.293 09:11:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.293 09:11:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.551 09:11:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.809 [2024-10-08 09:11:10.398046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.067 [2024-10-08 09:11:10.505167] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.067 [2024-10-08 09:11:10.505192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.067 [2024-10-08 09:11:10.565447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.067 [2024-10-08 09:11:10.565600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.067 [2024-10-08 09:11:10.565624] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.601 09:11:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.601 spdk_app_start Round 2 00:05:21.601 09:11:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:21.601 09:11:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58462 /var/tmp/spdk-nbd.sock 00:05:21.601 09:11:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58462 ']' 00:05:21.601 09:11:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.601 09:11:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.601 09:11:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
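The waitfornbd / waitfornbd_exit polling that brackets every dd above is simple: the helper greps /proc/partitions for the device name for up to 20 attempts, then reads one 4 KiB block back to confirm the device actually services I/O. A simplified sketch of that logic follows; the retry pause and the temp-file location are assumptions for illustration, not taken from the trace:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                              # assumed back-off between retries
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # confirm the device answers a direct read of one block
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }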
00:05:21.601 09:11:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.601 09:11:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.860 09:11:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.860 09:11:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:21.860 09:11:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.120 Malloc0 00:05:22.120 09:11:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.379 Malloc1 00:05:22.637 09:11:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.637 09:11:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.637 /dev/nbd0 00:05:22.896 09:11:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.896 09:11:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.896 1+0 records in 00:05:22.896 1+0 records out 
00:05:22.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329395 s, 12.4 MB/s 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.896 09:11:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.896 09:11:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.896 09:11:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.896 09:11:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.155 /dev/nbd1 00:05:23.155 09:11:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.155 09:11:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.155 1+0 records in 00:05:23.155 1+0 records out 00:05:23.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003493 s, 11.7 MB/s 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:23.155 09:11:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:23.155 09:11:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.155 09:11:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.155 09:11:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.155 09:11:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.156 09:11:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.415 09:11:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.415 { 00:05:23.415 "nbd_device": "/dev/nbd0", 00:05:23.415 "bdev_name": "Malloc0" 00:05:23.415 }, 00:05:23.415 { 00:05:23.415 "nbd_device": "/dev/nbd1", 00:05:23.415 "bdev_name": "Malloc1" 00:05:23.415 } 
00:05:23.415 ]' 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.415 { 00:05:23.415 "nbd_device": "/dev/nbd0", 00:05:23.415 "bdev_name": "Malloc0" 00:05:23.415 }, 00:05:23.415 { 00:05:23.415 "nbd_device": "/dev/nbd1", 00:05:23.415 "bdev_name": "Malloc1" 00:05:23.415 } 00:05:23.415 ]' 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.415 /dev/nbd1' 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.415 /dev/nbd1' 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.415 256+0 records in 00:05:23.415 256+0 records out 00:05:23.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496415 s, 211 MB/s 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.415 256+0 records in 00:05:23.415 256+0 records out 00:05:23.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210973 s, 49.7 MB/s 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.415 09:11:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.674 256+0 records in 00:05:23.674 256+0 records out 00:05:23.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235632 s, 44.5 MB/s 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.674 09:11:15 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.674 09:11:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.933 09:11:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.192 09:11:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.451 09:11:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.451 09:11:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.710 09:11:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.968 [2024-10-08 09:11:16.500305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.968 [2024-10-08 09:11:16.594392] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.968 [2024-10-08 09:11:16.594403] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.968 [2024-10-08 09:11:16.650612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.968 [2024-10-08 09:11:16.650731] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.968 [2024-10-08 09:11:16.650745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.271 09:11:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58462 /var/tmp/spdk-nbd.sock 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58462 ']' 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
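Condensed from the nbd_dd_data_verify trace above: app_repeat seeds 1 MiB of random data, copies it onto both exported NBD devices, compares it back, and then detaches the devices over the RPC socket. A non-authoritative sketch (device list, file path and the dd/cmp/rpc.py arguments are taken from the trace; the loop scaffolding and the poll interval are assumptions):

# Sketch of the traced write/verify/teardown flow (not the literal helper source).
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
rpc_server=/var/tmp/spdk-nbd.sock

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB random pattern
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write it to each device
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                              # verify it reads back intact
done
rm "$tmp_file"

for dev in "${nbd_list[@]}"; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$dev"
    for ((i = 1; i <= 20; i++)); do                              # waitfornbd_exit: poll /proc/partitions
        grep -q -w "$(basename "$dev")" /proc/partitions || break
        sleep 0.1                                                # interval assumed, not in the trace
    done
done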
00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:28.271 09:11:19 event.app_repeat -- event/event.sh@39 -- # killprocess 58462 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58462 ']' 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58462 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58462 00:05:28.271 killing process with pid 58462 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58462' 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58462 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58462 00:05:28.271 spdk_app_start is called in Round 0. 00:05:28.271 Shutdown signal received, stop current app iteration 00:05:28.271 Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 reinitialization... 00:05:28.271 spdk_app_start is called in Round 1. 00:05:28.271 Shutdown signal received, stop current app iteration 00:05:28.271 Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 reinitialization... 00:05:28.271 spdk_app_start is called in Round 2. 00:05:28.271 Shutdown signal received, stop current app iteration 00:05:28.271 Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 reinitialization... 00:05:28.271 spdk_app_start is called in Round 3. 00:05:28.271 Shutdown signal received, stop current app iteration 00:05:28.271 ************************************ 00:05:28.271 END TEST app_repeat 00:05:28.271 ************************************ 00:05:28.271 09:11:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:28.271 09:11:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:28.271 00:05:28.271 real 0m19.712s 00:05:28.271 user 0m44.484s 00:05:28.271 sys 0m2.945s 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.271 09:11:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.271 09:11:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:28.271 09:11:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.271 09:11:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.271 09:11:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.271 09:11:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.271 ************************************ 00:05:28.271 START TEST cpu_locks 00:05:28.271 ************************************ 00:05:28.271 09:11:19 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.542 * Looking for test storage... 
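Every app in this log is bracketed by the same two autotest_common.sh helpers the trace above walks through for pid 58462: waitforlisten blocks until the target answers on its UNIX socket, and killprocess identifies, signals and reaps it. A rough sketch; the liveness probe and the retry interval inside waitforlisten are assumptions, while the kill -0 / ps / kill / wait sequence is as traced:

# Rough shape of the two helpers; not the verbatim autotest_common.sh source.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # probe is an assumption: any cheap RPC that succeeds once the socket is live
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 0                        # nothing left to kill (assumed semantics)
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    # (the real helper special-cases process_name = sudo; omitted here)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}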
00:05:28.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:28.542 09:11:19 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.542 09:11:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:28.542 09:11:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.542 09:11:20 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.542 09:11:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:28.542 09:11:20 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.542 09:11:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.542 --rc genhtml_branch_coverage=1 00:05:28.542 --rc genhtml_function_coverage=1 00:05:28.542 --rc genhtml_legend=1 00:05:28.542 --rc geninfo_all_blocks=1 00:05:28.542 --rc geninfo_unexecuted_blocks=1 00:05:28.542 00:05:28.542 ' 00:05:28.542 09:11:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.542 --rc genhtml_branch_coverage=1 00:05:28.542 --rc genhtml_function_coverage=1 
00:05:28.542 --rc genhtml_legend=1 00:05:28.542 --rc geninfo_all_blocks=1 00:05:28.542 --rc geninfo_unexecuted_blocks=1 00:05:28.542 00:05:28.542 ' 00:05:28.542 09:11:20 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.542 --rc genhtml_branch_coverage=1 00:05:28.542 --rc genhtml_function_coverage=1 00:05:28.542 --rc genhtml_legend=1 00:05:28.542 --rc geninfo_all_blocks=1 00:05:28.542 --rc geninfo_unexecuted_blocks=1 00:05:28.542 00:05:28.542 ' 00:05:28.542 09:11:20 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.542 --rc genhtml_branch_coverage=1 00:05:28.542 --rc genhtml_function_coverage=1 00:05:28.542 --rc genhtml_legend=1 00:05:28.542 --rc geninfo_all_blocks=1 00:05:28.542 --rc geninfo_unexecuted_blocks=1 00:05:28.542 00:05:28.542 ' 00:05:28.542 09:11:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.542 09:11:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.542 09:11:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.542 09:11:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.542 09:11:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.542 09:11:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.542 09:11:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.542 ************************************ 00:05:28.542 START TEST default_locks 00:05:28.542 ************************************ 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58914 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58914 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58914 ']' 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.542 09:11:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.542 [2024-10-08 09:11:20.168238] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
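Before the lock tests proper, the harness decides (the scripts/common.sh trace near the top of cpu_locks) whether the installed lcov is older than 2 so it can pick the lcov_branch_coverage/lcov_function_coverage flags. The traced cmp_versions logic is a per-field numeric compare; a reconstruction in which the field splitting, the ternary loop bound and the '<' case follow the trace, and padding a missing field with 0 is an assumption:

# Reconstruction of: lt 1.15 2  ->  cmp_versions 1.15 '<' 2
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 lt=0 gt=0 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}       # missing field treated as 0 (assumption)
        (( d1 > d2 )) && { gt=1; break; }
        (( d1 < d2 )) && { lt=1; break; }
    done
    case "$op" in
        '<') (( lt == 1 )) ;;
        '>') (( gt == 1 )) ;;
        *)   (( lt == 0 && gt == 0 )) ;;
    esac
}
lt() { cmp_versions "$1" '<' "$2"; }
lt "$(lcov --version | awk '{print $NF}')" 2          # true here (1.15 < 2), so the branch/function coverage flags get set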
00:05:28.542 [2024-10-08 09:11:20.168366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58914 ] 00:05:28.801 [2024-10-08 09:11:20.302728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.801 [2024-10-08 09:11:20.389535] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.801 [2024-10-08 09:11:20.459221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.736 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.736 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:29.736 09:11:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58914 00:05:29.736 09:11:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58914 00:05:29.736 09:11:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58914 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58914 ']' 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58914 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58914 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.994 killing process with pid 58914 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58914' 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58914 00:05:29.994 09:11:21 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58914 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58914 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58914 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58914 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58914 ']' 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.562 
09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.562 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58914) - No such process 00:05:30.562 ERROR: process (pid: 58914) is no longer running 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.562 00:05:30.562 real 0m1.929s 00:05:30.562 user 0m2.048s 00:05:30.562 sys 0m0.580s 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.562 09:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.562 ************************************ 00:05:30.562 END TEST default_locks 00:05:30.562 ************************************ 00:05:30.562 09:11:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:30.562 09:11:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.562 09:11:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.562 09:11:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.562 ************************************ 00:05:30.562 START TEST default_locks_via_rpc 00:05:30.562 ************************************ 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58966 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58966 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58966 ']' 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:05:30.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.562 09:11:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.562 [2024-10-08 09:11:22.148588] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:30.562 [2024-10-08 09:11:22.148693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:05:30.822 [2024-10-08 09:11:22.287254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.822 [2024-10-08 09:11:22.391052] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.822 [2024-10-08 09:11:22.464623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58966 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58966 00:05:31.758 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58966 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58966 ']' 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58966 00:05:32.016 09:11:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58966 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.016 killing process with pid 58966 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58966' 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58966 00:05:32.016 09:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58966 00:05:32.584 00:05:32.584 real 0m2.004s 00:05:32.584 user 0m2.210s 00:05:32.584 sys 0m0.581s 00:05:32.584 09:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.584 09:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.584 ************************************ 00:05:32.584 END TEST default_locks_via_rpc 00:05:32.584 ************************************ 00:05:32.584 09:11:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:32.584 09:11:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.584 09:11:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.584 09:11:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.584 ************************************ 00:05:32.584 START TEST non_locking_app_on_locked_coremask 00:05:32.584 ************************************ 00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59017 00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59017 /var/tmp/spdk.sock 00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59017 ']' 00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
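The two default_locks variants that just finished reduce to the same assertions: a target started with -m 0x1 must hold a file lock whose name contains spdk_cpu_lock, waitforlisten against its pid must fail once the process is gone, and no lock files may survive it. A condensed sketch; locks_exist and its lslocks pipeline are as traced, while the launch line and the no_locks glob handling are simplified assumptions (NOT is the expect-failure wrapper traced further below):

# Condensed shape of default_locks; default_locks_via_rpc is the same test with
# "rpc_cmd framework_disable_cpumask_locks" / "framework_enable_cpumask_locks"
# toggled in between (both RPC names appear verbatim in the trace).
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock       # the pid must hold its per-core file lock
}
no_locks() {
    local lock_files=(/var/tmp/spdk_cpu_lock_*)   # expands to () in the trace
    (( ${#lock_files[@]} == 0 ))                  # assumes nullglob, as that empty expansion suggests
}

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # illustrative launch only
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"
locks_exist "$spdk_tgt_pid"
killprocess "$spdk_tgt_pid"
NOT waitforlisten "$spdk_tgt_pid"                 # a dead pid must make waitforlisten fail
no_locks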
00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.584 09:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.584 [2024-10-08 09:11:24.227198] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:32.584 [2024-10-08 09:11:24.227318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59017 ] 00:05:32.842 [2024-10-08 09:11:24.366848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.842 [2024-10-08 09:11:24.465335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.103 [2024-10-08 09:11:24.538194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59033 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59033 /var/tmp/spdk2.sock 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59033 ']' 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.672 09:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.672 [2024-10-08 09:11:25.253289] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:33.672 [2024-10-08 09:11:25.253386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59033 ] 00:05:33.931 [2024-10-08 09:11:25.390730] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
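non_locking_app_on_locked_coremask, whose second target just reported 'CPU core locks deactivated', checks that two targets may share core 0 only when the second one opts out of lock claiming, and that the lock keeps belonging to the first. Sketch of the traced flow, reusing waitforlisten, locks_exist and killprocess as sketched earlier; the flags, the second RPC socket and the locks_exist target match the trace, the launch scaffolding is illustrative:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # first app claims the core 0 lock
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"

# same mask, but lock claiming disabled, so it must come up despite the overlap
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

locks_exist "$spdk_tgt_pid"                                 # lock still held by the first app
killprocess "$spdk_tgt_pid"
killprocess "$spdk_tgt_pid2"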
00:05:33.931 [2024-10-08 09:11:25.395391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.931 [2024-10-08 09:11:25.589981] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.189 [2024-10-08 09:11:25.734723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.756 09:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.756 09:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:34.756 09:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59017 00:05:34.756 09:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59017 00:05:34.756 09:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59017 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59017 ']' 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59017 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59017 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.692 killing process with pid 59017 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59017' 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59017 00:05:35.692 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59017 00:05:36.629 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59033 00:05:36.629 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59033 ']' 00:05:36.629 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59033 00:05:36.630 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:36.630 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.630 09:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59033 00:05:36.630 09:11:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.630 09:11:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.630 killing process with pid 59033 00:05:36.630 09:11:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59033' 00:05:36.630 09:11:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59033 00:05:36.630 09:11:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59033 00:05:36.888 00:05:36.888 real 0m4.289s 00:05:36.888 user 0m4.766s 00:05:36.888 sys 0m1.203s 00:05:36.888 09:11:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.888 09:11:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.888 ************************************ 00:05:36.888 END TEST non_locking_app_on_locked_coremask 00:05:36.888 ************************************ 00:05:36.888 09:11:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:36.888 09:11:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.888 09:11:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.888 09:11:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.888 ************************************ 00:05:36.888 START TEST locking_app_on_unlocked_coremask 00:05:36.888 ************************************ 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59100 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59100 /var/tmp/spdk.sock 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59100 ']' 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.888 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.888 [2024-10-08 09:11:28.542894] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:36.888 [2024-10-08 09:11:28.543018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59100 ] 00:05:37.147 [2024-10-08 09:11:28.676269] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.147 [2024-10-08 09:11:28.676321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.147 [2024-10-08 09:11:28.794195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.405 [2024-10-08 09:11:28.868063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59116 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59116 /var/tmp/spdk2.sock 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59116 ']' 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.972 09:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.972 [2024-10-08 09:11:29.624294] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:05:37.972 [2024-10-08 09:11:29.624427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:05:38.232 [2024-10-08 09:11:29.771942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.491 [2024-10-08 09:11:30.020250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.749 [2024-10-08 09:11:30.174680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.008 09:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.008 09:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:39.008 09:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59116 00:05:39.008 09:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59116 00:05:39.008 09:11:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59100 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59100 ']' 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59100 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59100 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.944 killing process with pid 59100 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59100' 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59100 00:05:39.944 09:11:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59100 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59116 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59116 ']' 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59116 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59116 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.882 killing process with pid 59116 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59116' 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59116 00:05:40.882 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59116 00:05:41.141 00:05:41.141 real 0m4.234s 00:05:41.141 user 0m4.738s 00:05:41.141 sys 0m1.114s 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.141 ************************************ 00:05:41.141 END TEST locking_app_on_unlocked_coremask 00:05:41.141 ************************************ 00:05:41.141 09:11:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:41.141 09:11:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.141 09:11:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.141 09:11:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.141 ************************************ 00:05:41.141 START TEST locking_app_on_locked_coremask 00:05:41.141 ************************************ 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59183 00:05:41.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59183 /var/tmp/spdk.sock 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59183 ']' 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.141 09:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.399 [2024-10-08 09:11:32.850756] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
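locking_app_on_unlocked_coremask, which ended just above, is the mirror case: the first target runs with --disable-cpumask-locks, so the second, normally started target is the one that claims core 0 (the trace checks locks_exist against 59116, the second pid). Roughly, under the same illustrative launch scaffolding as before:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # first app leaves core 0 unlocked
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # second app claims the lock normally
spdk_tgt_pid2=$!
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

locks_exist "$spdk_tgt_pid2"     # the lock belongs to the second app
killprocess "$spdk_tgt_pid"
killprocess "$spdk_tgt_pid2"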
00:05:41.399 [2024-10-08 09:11:32.850887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:05:41.399 [2024-10-08 09:11:32.987254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.658 [2024-10-08 09:11:33.108111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.658 [2024-10-08 09:11:33.182432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59199 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59199 /var/tmp/spdk2.sock 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59199 /var/tmp/spdk2.sock 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59199 /var/tmp/spdk2.sock 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59199 ']' 00:05:42.224 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.225 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.225 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.225 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.225 09:11:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.483 [2024-10-08 09:11:33.954044] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:05:42.483 [2024-10-08 09:11:33.954638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59199 ] 00:05:42.483 [2024-10-08 09:11:34.094346] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59183 has claimed it. 00:05:42.483 [2024-10-08 09:11:34.094428] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.049 ERROR: process (pid: 59199) is no longer running 00:05:43.050 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59199) - No such process 00:05:43.050 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.050 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:43.050 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:43.050 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.050 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.050 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.050 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59183 00:05:43.050 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59183 00:05:43.050 09:11:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59183 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59183 ']' 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59183 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59183 00:05:43.644 killing process with pid 59183 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59183' 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59183 00:05:43.644 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59183 00:05:43.903 00:05:43.903 real 0m2.808s 00:05:43.903 user 0m3.295s 00:05:43.903 sys 0m0.668s 00:05:43.903 09:11:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.903 09:11:35 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:43.903 ************************************ 00:05:43.903 END TEST locking_app_on_locked_coremask 00:05:43.903 ************************************ 00:05:44.162 09:11:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:44.162 09:11:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.162 09:11:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.162 09:11:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.162 ************************************ 00:05:44.162 START TEST locking_overlapped_coremask 00:05:44.162 ************************************ 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59250 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59250 /var/tmp/spdk.sock 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59250 ']' 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.162 09:11:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.162 [2024-10-08 09:11:35.697663] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
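The conflict case that just ended (pid 59199 was refused core 0 and exited) and the overlapped-mask case now starting both lean on the NOT wrapper from autotest_common.sh: run a command that is expected to fail and turn that failure into a passing check. Its traced control flow reduces to roughly the following; the accepted argument types and the signal-exit normalization are assumptions, while the es bookkeeping follows the traced lines:

valid_exec_arg() {   # traced as a "type -t" dispatch; the accepted categories are assumed
    case "$(type -t "$1")" in function | builtin | file) ;; *) return 1 ;; esac
}
NOT() {
    local es=0
    valid_exec_arg "$@" || return 1
    "$@" || es=$?                            # run the wrapped command, remember its failure
    (( es > 128 )) && es=$(( es & ~128 ))    # assumed: strip the "killed by signal" bias
    # (the traced [[ -n '' ]] compares es against an optional expected status; omitted)
    (( !es == 0 ))                           # succeed only if the wrapped command failed
}

NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock   # passes because the second app never came up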
00:05:44.162 [2024-10-08 09:11:35.698139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59250 ] 00:05:44.162 [2024-10-08 09:11:35.830646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.420 [2024-10-08 09:11:35.947961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.420 [2024-10-08 09:11:35.948046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.420 [2024-10-08 09:11:35.948045] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.420 [2024-10-08 09:11:36.029181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.355 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.355 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:45.355 09:11:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:45.355 09:11:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59268 00:05:45.355 09:11:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59268 /var/tmp/spdk2.sock 00:05:45.355 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:45.355 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59268 /var/tmp/spdk2.sock 00:05:45.355 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59268 /var/tmp/spdk2.sock 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59268 ']' 00:05:45.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.356 09:11:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.356 [2024-10-08 09:11:36.782463] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:05:45.356 [2024-10-08 09:11:36.782874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59268 ] 00:05:45.356 [2024-10-08 09:11:36.922022] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59250 has claimed it. 00:05:45.356 [2024-10-08 09:11:36.922135] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.923 ERROR: process (pid: 59268) is no longer running 00:05:45.923 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59268) - No such process 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59250 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59250 ']' 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59250 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59250 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59250' 00:05:45.923 killing process with pid 59250 00:05:45.923 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59250 00:05:45.923 09:11:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59250 00:05:46.491 00:05:46.491 real 0m2.332s 00:05:46.491 user 0m6.535s 00:05:46.491 sys 0m0.451s 00:05:46.491 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.491 09:11:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.491 ************************************ 00:05:46.491 END TEST locking_overlapped_coremask 00:05:46.491 ************************************ 00:05:46.491 09:11:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:46.491 09:11:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.491 09:11:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.491 09:11:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.491 ************************************ 00:05:46.491 START TEST locking_overlapped_coremask_via_rpc 00:05:46.491 ************************************ 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59308 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59308 /var/tmp/spdk.sock 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59308 ']' 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.491 09:11:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.491 [2024-10-08 09:11:38.095500] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:46.491 [2024-10-08 09:11:38.095922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59308 ] 00:05:46.749 [2024-10-08 09:11:38.232622] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
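(Editor's sketch, not part of the captured log: the check_remaining_locks step above amounts to comparing the per-core lock files in /var/tmp against the cores implied by the -m 0x7 mask. A minimal stand-alone version of that check, assuming an spdk_tgt instance holding cores 0-2 is already running, might look like:
    # sketch only: expect exactly spdk_cpu_lock_000..002 for core mask 0x7
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    actual=(/var/tmp/spdk_cpu_lock_*)
    if [[ "${actual[*]}" == "${expected[*]}" ]]; then
        echo "lock files match core mask 0x7"
    else
        echo "unexpected lock files: ${actual[*]}" >&2
    fi
The string comparison here is a simplification of the pattern match used by the test itself.)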
00:05:46.749 [2024-10-08 09:11:38.232675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.749 [2024-10-08 09:11:38.353401] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.749 [2024-10-08 09:11:38.353518] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.749 [2024-10-08 09:11:38.353529] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.749 [2024-10-08 09:11:38.425062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59326 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59326 /var/tmp/spdk2.sock 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59326 ']' 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:47.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.685 09:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.685 [2024-10-08 09:11:39.136112] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:47.685 [2024-10-08 09:11:39.136237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59326 ] 00:05:47.685 [2024-10-08 09:11:39.281383] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:47.685 [2024-10-08 09:11:39.281445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.943 [2024-10-08 09:11:39.523801] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.943 [2024-10-08 09:11:39.527894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:05:47.943 [2024-10-08 09:11:39.527895] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.201 [2024-10-08 09:11:39.665135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.769 [2024-10-08 09:11:40.174958] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59308 has claimed it. 00:05:48.769 request: 00:05:48.769 { 00:05:48.769 "method": "framework_enable_cpumask_locks", 00:05:48.769 "req_id": 1 00:05:48.769 } 00:05:48.769 Got JSON-RPC error response 00:05:48.769 response: 00:05:48.769 { 00:05:48.769 "code": -32603, 00:05:48.769 "message": "Failed to claim CPU core: 2" 00:05:48.769 } 00:05:48.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
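(Editor's sketch, not part of the captured log: the JSON-RPC error above is the expected outcome of this test. Both targets were launched with --disable-cpumask-locks; the first instance (pid 59308, -m 0x7) then enabled the locks over RPC, so when the second instance (pid 59326, -m 0x1c) issues the same call it fails on the shared core 2 with -32603. Assuming both instances are still listening on their sockets, the same pair of calls could be replayed by hand roughly as:
    # sketch only: spdk.sock holds cores 0-2, spdk2.sock wants cores 2-4
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_enable_cpumask_locks                         # first claim succeeds
    $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "expected failure: core 2 already claimed"      # -32603 as shown above
)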
00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59308 /var/tmp/spdk.sock 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59308 ']' 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59326 /var/tmp/spdk2.sock 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59326 ']' 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.769 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.336 ************************************ 00:05:49.336 END TEST locking_overlapped_coremask_via_rpc 00:05:49.336 ************************************ 00:05:49.336 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.336 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:49.336 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:49.336 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:49.336 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:49.336 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:49.336 00:05:49.336 real 0m2.709s 00:05:49.336 user 0m1.430s 00:05:49.336 sys 0m0.205s 00:05:49.336 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.336 09:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.336 09:11:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:49.336 09:11:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59308 ]] 00:05:49.336 09:11:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59308 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59308 ']' 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59308 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59308 00:05:49.336 killing process with pid 59308 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59308' 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59308 00:05:49.336 09:11:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59308 00:05:49.594 09:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59326 ]] 00:05:49.594 09:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59326 00:05:49.594 09:11:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59326 ']' 00:05:49.594 09:11:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59326 00:05:49.594 09:11:41 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:49.594 09:11:41 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.594 
09:11:41 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59326 00:05:49.594 09:11:41 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:49.594 killing process with pid 59326 00:05:49.594 09:11:41 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:49.594 09:11:41 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59326' 00:05:49.594 09:11:41 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59326 00:05:49.594 09:11:41 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59326 00:05:50.162 09:11:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.162 Process with pid 59308 is not found 00:05:50.162 09:11:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:50.162 09:11:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59308 ]] 00:05:50.162 09:11:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59308 00:05:50.162 09:11:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59308 ']' 00:05:50.162 09:11:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59308 00:05:50.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59308) - No such process 00:05:50.162 09:11:41 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59308 is not found' 00:05:50.162 09:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59326 ]] 00:05:50.162 09:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59326 00:05:50.162 Process with pid 59326 is not found 00:05:50.163 09:11:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59326 ']' 00:05:50.163 09:11:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59326 00:05:50.163 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59326) - No such process 00:05:50.163 09:11:41 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59326 is not found' 00:05:50.163 09:11:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.163 ************************************ 00:05:50.163 END TEST cpu_locks 00:05:50.163 ************************************ 00:05:50.163 00:05:50.163 real 0m21.789s 00:05:50.163 user 0m37.966s 00:05:50.163 sys 0m5.787s 00:05:50.163 09:11:41 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.163 09:11:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.163 ************************************ 00:05:50.163 END TEST event 00:05:50.163 ************************************ 00:05:50.163 00:05:50.163 real 0m51.442s 00:05:50.163 user 1m40.256s 00:05:50.163 sys 0m9.648s 00:05:50.163 09:11:41 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.163 09:11:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.163 09:11:41 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:50.163 09:11:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.163 09:11:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.163 09:11:41 -- common/autotest_common.sh@10 -- # set +x 00:05:50.163 ************************************ 00:05:50.163 START TEST thread 00:05:50.163 ************************************ 00:05:50.163 09:11:41 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:50.422 * Looking for test storage... 
00:05:50.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:50.422 09:11:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.422 09:11:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.422 09:11:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.422 09:11:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.422 09:11:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.422 09:11:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.422 09:11:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.422 09:11:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.422 09:11:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.422 09:11:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.422 09:11:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.422 09:11:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:50.422 09:11:41 thread -- scripts/common.sh@345 -- # : 1 00:05:50.422 09:11:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.422 09:11:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.422 09:11:41 thread -- scripts/common.sh@365 -- # decimal 1 00:05:50.422 09:11:41 thread -- scripts/common.sh@353 -- # local d=1 00:05:50.422 09:11:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.422 09:11:41 thread -- scripts/common.sh@355 -- # echo 1 00:05:50.422 09:11:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.422 09:11:41 thread -- scripts/common.sh@366 -- # decimal 2 00:05:50.422 09:11:41 thread -- scripts/common.sh@353 -- # local d=2 00:05:50.422 09:11:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.422 09:11:41 thread -- scripts/common.sh@355 -- # echo 2 00:05:50.422 09:11:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.422 09:11:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.422 09:11:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.422 09:11:41 thread -- scripts/common.sh@368 -- # return 0 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:50.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.422 --rc genhtml_branch_coverage=1 00:05:50.422 --rc genhtml_function_coverage=1 00:05:50.422 --rc genhtml_legend=1 00:05:50.422 --rc geninfo_all_blocks=1 00:05:50.422 --rc geninfo_unexecuted_blocks=1 00:05:50.422 00:05:50.422 ' 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:50.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.422 --rc genhtml_branch_coverage=1 00:05:50.422 --rc genhtml_function_coverage=1 00:05:50.422 --rc genhtml_legend=1 00:05:50.422 --rc geninfo_all_blocks=1 00:05:50.422 --rc geninfo_unexecuted_blocks=1 00:05:50.422 00:05:50.422 ' 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:50.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:50.422 --rc genhtml_branch_coverage=1 00:05:50.422 --rc genhtml_function_coverage=1 00:05:50.422 --rc genhtml_legend=1 00:05:50.422 --rc geninfo_all_blocks=1 00:05:50.422 --rc geninfo_unexecuted_blocks=1 00:05:50.422 00:05:50.422 ' 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:50.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.422 --rc genhtml_branch_coverage=1 00:05:50.422 --rc genhtml_function_coverage=1 00:05:50.422 --rc genhtml_legend=1 00:05:50.422 --rc geninfo_all_blocks=1 00:05:50.422 --rc geninfo_unexecuted_blocks=1 00:05:50.422 00:05:50.422 ' 00:05:50.422 09:11:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.422 09:11:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.422 ************************************ 00:05:50.422 START TEST thread_poller_perf 00:05:50.422 ************************************ 00:05:50.422 09:11:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.422 [2024-10-08 09:11:41.998336] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:50.422 [2024-10-08 09:11:41.998790] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59462 ] 00:05:50.681 [2024-10-08 09:11:42.139236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.681 [2024-10-08 09:11:42.250414] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.681 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:52.056 [2024-10-08T09:11:43.739Z] ====================================== 00:05:52.056 [2024-10-08T09:11:43.739Z] busy:2210185260 (cyc) 00:05:52.056 [2024-10-08T09:11:43.739Z] total_run_count: 341000 00:05:52.056 [2024-10-08T09:11:43.739Z] tsc_hz: 2200000000 (cyc) 00:05:52.056 [2024-10-08T09:11:43.739Z] ====================================== 00:05:52.056 [2024-10-08T09:11:43.739Z] poller_cost: 6481 (cyc), 2945 (nsec) 00:05:52.056 00:05:52.056 real 0m1.357s 00:05:52.056 user 0m1.187s 00:05:52.056 sys 0m0.062s 00:05:52.056 09:11:43 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.056 ************************************ 00:05:52.056 END TEST thread_poller_perf 00:05:52.056 ************************************ 00:05:52.056 09:11:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.056 09:11:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.056 09:11:43 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:52.056 09:11:43 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.056 09:11:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.056 ************************************ 00:05:52.056 START TEST thread_poller_perf 00:05:52.056 ************************************ 00:05:52.056 09:11:43 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.056 [2024-10-08 09:11:43.404014] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:52.056 [2024-10-08 09:11:43.404111] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59496 ] 00:05:52.056 [2024-10-08 09:11:43.540910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.056 Running 1000 pollers for 1 seconds with 0 microseconds period. 
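(Editor's sketch, not part of the captured log: the poller_cost rows in these tables are simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. For the 1-microsecond-period run above, the reported 6481 (cyc) / 2945 (nsec) can be reproduced with plain shell arithmetic:
    # sketch only: recompute poller_cost from the figures printed above
    busy=2210185260 runs=341000 tsc_hz=2200000000
    cyc=$(( busy / runs ))                    # 6481 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # 2945 ns at 2.2 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"
The 0-microsecond-period run that follows is computed the same way from its own busy/total_run_count figures.)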
00:05:52.056 [2024-10-08 09:11:43.632598] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.432 [2024-10-08T09:11:45.115Z] ====================================== 00:05:53.432 [2024-10-08T09:11:45.115Z] busy:2202573472 (cyc) 00:05:53.432 [2024-10-08T09:11:45.115Z] total_run_count: 4493000 00:05:53.432 [2024-10-08T09:11:45.115Z] tsc_hz: 2200000000 (cyc) 00:05:53.432 [2024-10-08T09:11:45.115Z] ====================================== 00:05:53.433 [2024-10-08T09:11:45.116Z] poller_cost: 490 (cyc), 222 (nsec) 00:05:53.433 00:05:53.433 real 0m1.325s 00:05:53.433 user 0m1.165s 00:05:53.433 sys 0m0.051s 00:05:53.433 ************************************ 00:05:53.433 END TEST thread_poller_perf 00:05:53.433 ************************************ 00:05:53.433 09:11:44 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.433 09:11:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.433 09:11:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:53.433 ************************************ 00:05:53.433 END TEST thread 00:05:53.433 ************************************ 00:05:53.433 00:05:53.433 real 0m2.975s 00:05:53.433 user 0m2.497s 00:05:53.433 sys 0m0.262s 00:05:53.433 09:11:44 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.433 09:11:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.433 09:11:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:53.433 09:11:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:53.433 09:11:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.433 09:11:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.433 09:11:44 -- common/autotest_common.sh@10 -- # set +x 00:05:53.433 ************************************ 00:05:53.433 START TEST app_cmdline 00:05:53.433 ************************************ 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:53.433 * Looking for test storage... 
00:05:53.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.433 09:11:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.433 --rc genhtml_branch_coverage=1 00:05:53.433 --rc genhtml_function_coverage=1 00:05:53.433 --rc genhtml_legend=1 00:05:53.433 --rc geninfo_all_blocks=1 00:05:53.433 --rc geninfo_unexecuted_blocks=1 00:05:53.433 00:05:53.433 ' 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.433 --rc genhtml_branch_coverage=1 00:05:53.433 --rc genhtml_function_coverage=1 00:05:53.433 --rc genhtml_legend=1 00:05:53.433 --rc geninfo_all_blocks=1 00:05:53.433 --rc geninfo_unexecuted_blocks=1 00:05:53.433 
00:05:53.433 ' 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.433 --rc genhtml_branch_coverage=1 00:05:53.433 --rc genhtml_function_coverage=1 00:05:53.433 --rc genhtml_legend=1 00:05:53.433 --rc geninfo_all_blocks=1 00:05:53.433 --rc geninfo_unexecuted_blocks=1 00:05:53.433 00:05:53.433 ' 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.433 --rc genhtml_branch_coverage=1 00:05:53.433 --rc genhtml_function_coverage=1 00:05:53.433 --rc genhtml_legend=1 00:05:53.433 --rc geninfo_all_blocks=1 00:05:53.433 --rc geninfo_unexecuted_blocks=1 00:05:53.433 00:05:53.433 ' 00:05:53.433 09:11:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:53.433 09:11:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59580 00:05:53.433 09:11:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:53.433 09:11:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59580 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59580 ']' 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.433 09:11:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.433 [2024-10-08 09:11:45.046667] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:05:53.433 [2024-10-08 09:11:45.046778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:05:53.696 [2024-10-08 09:11:45.179579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.696 [2024-10-08 09:11:45.287267] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.696 [2024-10-08 09:11:45.358099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.630 09:11:46 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.630 09:11:46 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:54.630 09:11:46 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:54.630 { 00:05:54.630 "version": "SPDK v25.01-pre git sha1 91fca59bc", 00:05:54.630 "fields": { 00:05:54.630 "major": 25, 00:05:54.630 "minor": 1, 00:05:54.630 "patch": 0, 00:05:54.630 "suffix": "-pre", 00:05:54.630 "commit": "91fca59bc" 00:05:54.630 } 00:05:54.630 } 00:05:54.630 09:11:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:54.630 09:11:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:54.630 09:11:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:54.631 09:11:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:54.631 09:11:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:54.631 09:11:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:54.631 09:11:46 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.631 09:11:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:54.631 09:11:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:54.631 09:11:46 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.888 09:11:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:54.888 09:11:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:54.888 09:11:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.888 09:11:46 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:54.888 09:11:46 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.888 09:11:46 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.888 09:11:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.888 09:11:46 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.889 09:11:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.889 09:11:46 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.889 09:11:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.889 09:11:46 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:54.889 09:11:46 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:54.889 09:11:46 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:55.147 request: 00:05:55.147 { 00:05:55.147 "method": "env_dpdk_get_mem_stats", 00:05:55.147 "req_id": 1 00:05:55.147 } 00:05:55.147 Got JSON-RPC error response 00:05:55.147 response: 00:05:55.147 { 00:05:55.147 "code": -32601, 00:05:55.147 "message": "Method not found" 00:05:55.147 } 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.147 09:11:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59580 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59580 ']' 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59580 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59580 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.147 killing process with pid 59580 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59580' 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@969 -- # kill 59580 00:05:55.147 09:11:46 app_cmdline -- common/autotest_common.sh@974 -- # wait 59580 00:05:55.714 00:05:55.714 real 0m2.361s 00:05:55.714 user 0m2.953s 00:05:55.714 sys 0m0.522s 00:05:55.714 09:11:47 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.714 ************************************ 00:05:55.714 END TEST app_cmdline 00:05:55.714 ************************************ 00:05:55.714 09:11:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.714 09:11:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:55.714 09:11:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.714 09:11:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.714 09:11:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.714 ************************************ 00:05:55.714 START TEST version 00:05:55.714 ************************************ 00:05:55.714 09:11:47 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:55.714 * Looking for test storage... 
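(Editor's sketch, not part of the captured log: the "Method not found" (-32601) response above is deliberate. The cmdline test starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and anything else is rejected. Assuming such a target is listening on the default /var/tmp/spdk.sock, the behaviour can be exercised as:
    # sketch only: target started with --rpcs-allowed spdk_get_version,rpc_get_methods
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version        # allowed: returns the version JSON shown above
    $rpc rpc_get_methods         # allowed: lists exactly the permitted methods
    $rpc env_dpdk_get_mem_stats  # rejected with -32601 "Method not found"
)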
00:05:55.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:55.714 09:11:47 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.714 09:11:47 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.714 09:11:47 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.714 09:11:47 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.714 09:11:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.714 09:11:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.714 09:11:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.714 09:11:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.714 09:11:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.973 09:11:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.973 09:11:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.973 09:11:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.973 09:11:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.973 09:11:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.973 09:11:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.973 09:11:47 version -- scripts/common.sh@344 -- # case "$op" in 00:05:55.973 09:11:47 version -- scripts/common.sh@345 -- # : 1 00:05:55.973 09:11:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.973 09:11:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.973 09:11:47 version -- scripts/common.sh@365 -- # decimal 1 00:05:55.973 09:11:47 version -- scripts/common.sh@353 -- # local d=1 00:05:55.973 09:11:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.973 09:11:47 version -- scripts/common.sh@355 -- # echo 1 00:05:55.973 09:11:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.973 09:11:47 version -- scripts/common.sh@366 -- # decimal 2 00:05:55.973 09:11:47 version -- scripts/common.sh@353 -- # local d=2 00:05:55.973 09:11:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.973 09:11:47 version -- scripts/common.sh@355 -- # echo 2 00:05:55.973 09:11:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.973 09:11:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.973 09:11:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.973 09:11:47 version -- scripts/common.sh@368 -- # return 0 00:05:55.973 09:11:47 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.973 09:11:47 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.973 --rc genhtml_branch_coverage=1 00:05:55.973 --rc genhtml_function_coverage=1 00:05:55.973 --rc genhtml_legend=1 00:05:55.973 --rc geninfo_all_blocks=1 00:05:55.973 --rc geninfo_unexecuted_blocks=1 00:05:55.973 00:05:55.973 ' 00:05:55.973 09:11:47 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.973 --rc genhtml_branch_coverage=1 00:05:55.973 --rc genhtml_function_coverage=1 00:05:55.973 --rc genhtml_legend=1 00:05:55.973 --rc geninfo_all_blocks=1 00:05:55.973 --rc geninfo_unexecuted_blocks=1 00:05:55.973 00:05:55.973 ' 00:05:55.974 09:11:47 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.974 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:55.974 --rc genhtml_branch_coverage=1 00:05:55.974 --rc genhtml_function_coverage=1 00:05:55.974 --rc genhtml_legend=1 00:05:55.974 --rc geninfo_all_blocks=1 00:05:55.974 --rc geninfo_unexecuted_blocks=1 00:05:55.974 00:05:55.974 ' 00:05:55.974 09:11:47 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.974 --rc genhtml_branch_coverage=1 00:05:55.974 --rc genhtml_function_coverage=1 00:05:55.974 --rc genhtml_legend=1 00:05:55.974 --rc geninfo_all_blocks=1 00:05:55.974 --rc geninfo_unexecuted_blocks=1 00:05:55.974 00:05:55.974 ' 00:05:55.974 09:11:47 version -- app/version.sh@17 -- # get_header_version major 00:05:55.974 09:11:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.974 09:11:47 version -- app/version.sh@14 -- # cut -f2 00:05:55.974 09:11:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.974 09:11:47 version -- app/version.sh@17 -- # major=25 00:05:55.974 09:11:47 version -- app/version.sh@18 -- # get_header_version minor 00:05:55.974 09:11:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.974 09:11:47 version -- app/version.sh@14 -- # cut -f2 00:05:55.974 09:11:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.974 09:11:47 version -- app/version.sh@18 -- # minor=1 00:05:55.974 09:11:47 version -- app/version.sh@19 -- # get_header_version patch 00:05:55.974 09:11:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.974 09:11:47 version -- app/version.sh@14 -- # cut -f2 00:05:55.974 09:11:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.974 09:11:47 version -- app/version.sh@19 -- # patch=0 00:05:55.974 09:11:47 version -- app/version.sh@20 -- # get_header_version suffix 00:05:55.974 09:11:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.974 09:11:47 version -- app/version.sh@14 -- # cut -f2 00:05:55.974 09:11:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.974 09:11:47 version -- app/version.sh@20 -- # suffix=-pre 00:05:55.974 09:11:47 version -- app/version.sh@22 -- # version=25.1 00:05:55.974 09:11:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:55.974 09:11:47 version -- app/version.sh@28 -- # version=25.1rc0 00:05:55.974 09:11:47 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:55.974 09:11:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:55.974 09:11:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:55.974 09:11:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:55.974 00:05:55.974 real 0m0.275s 00:05:55.974 user 0m0.180s 00:05:55.974 sys 0m0.137s 00:05:55.974 09:11:47 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.974 ************************************ 00:05:55.974 END TEST version 00:05:55.974 ************************************ 00:05:55.974 09:11:47 version -- common/autotest_common.sh@10 -- # set +x 00:05:55.974 09:11:47 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:55.974 09:11:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:55.974 09:11:47 -- spdk/autotest.sh@194 -- # uname -s 00:05:55.974 09:11:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:55.974 09:11:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:55.974 09:11:47 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:55.974 09:11:47 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:55.974 09:11:47 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:55.974 09:11:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.974 09:11:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.974 09:11:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.974 ************************************ 00:05:55.974 START TEST spdk_dd 00:05:55.974 ************************************ 00:05:55.974 09:11:47 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:55.974 * Looking for test storage... 00:05:55.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:55.974 09:11:47 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.974 09:11:47 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.974 09:11:47 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.233 09:11:47 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:56.233 09:11:47 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.233 09:11:47 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.233 --rc genhtml_branch_coverage=1 00:05:56.233 --rc genhtml_function_coverage=1 00:05:56.233 --rc genhtml_legend=1 00:05:56.233 --rc geninfo_all_blocks=1 00:05:56.233 --rc geninfo_unexecuted_blocks=1 00:05:56.233 00:05:56.233 ' 00:05:56.233 09:11:47 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.233 --rc genhtml_branch_coverage=1 00:05:56.233 --rc genhtml_function_coverage=1 00:05:56.233 --rc genhtml_legend=1 00:05:56.233 --rc geninfo_all_blocks=1 00:05:56.233 --rc geninfo_unexecuted_blocks=1 00:05:56.233 00:05:56.233 ' 00:05:56.233 09:11:47 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.233 --rc genhtml_branch_coverage=1 00:05:56.233 --rc genhtml_function_coverage=1 00:05:56.233 --rc genhtml_legend=1 00:05:56.233 --rc geninfo_all_blocks=1 00:05:56.233 --rc geninfo_unexecuted_blocks=1 00:05:56.233 00:05:56.233 ' 00:05:56.233 09:11:47 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.233 --rc genhtml_branch_coverage=1 00:05:56.233 --rc genhtml_function_coverage=1 00:05:56.233 --rc genhtml_legend=1 00:05:56.233 --rc geninfo_all_blocks=1 00:05:56.233 --rc geninfo_unexecuted_blocks=1 00:05:56.233 00:05:56.233 ' 00:05:56.233 09:11:47 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.233 09:11:47 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.233 09:11:47 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.233 09:11:47 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.233 09:11:47 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.233 09:11:47 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:56.233 09:11:47 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.233 09:11:47 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:56.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.493 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.493 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.493 09:11:48 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:56.493 09:11:48 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:56.493 09:11:48 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:56.493 09:11:48 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:56.753 09:11:48 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
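The trace just above (scripts/common.sh: iter_pci_class_code 01 08 02 via nvme_in_userspace) locates NVMe controllers by PCI class 01, subclass 08, prog-if 02 and collects their BDF addresses. The following is a minimal illustrative bash sketch of that lspci-based filter, not the verbatim scripts/common.sh code; the helper name is made up and it assumes pciutils' lspci is installed.

#!/usr/bin/env bash
# Illustrative only: list NVMe controller BDFs the way the trace above does.
# lspci -mm -n -D prints machine-readable lines such as:
#   0000:00:10.0 "0108" "1b36" "0010" -p02 ...
# where "0108" is class 01 (mass storage) + subclass 08 (NVM) and -p02 is prog-if 02.
nvme_bdfs() {
  lspci -mm -n -D | awk -v cc='"0108"' '$2 == cc && /-p02/ { print $1 }'
}
nvme_bdfs   # on this VM it would print 0000:00:10.0 and 0000:00:11.0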
00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:56.753 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
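The check_liburing loop traced through here decides whether the spdk_dd binary was built against liburing by reading the NEEDED entries of its ELF dynamic section (objdump -p piped through grep NEEDED) and glob-matching each library name against liburing.so.*. A minimal sketch of the same idea follows; the function name is illustrative rather than part of dd/common.sh, and it assumes binutils' objdump is on PATH.

#!/usr/bin/env bash
# Illustrative only: report whether a binary lists liburing among its NEEDED libraries.
is_linked_to_liburing() {
  local binary=$1 _ lib
  while read -r _ lib _; do
    # objdump -p prints dynamic-section entries such as:
    #   NEEDED               liburing.so.2
    [[ $lib == liburing.so.* ]] && return 0
  done < <(objdump -p "$binary" | grep NEEDED)
  return 1
}
is_linked_to_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd &&
  printf '* spdk_dd linked to liburing\n'

As the rest of the trace shows, dd.sh records the result as liburing_in_use and later combines it with SPDK_TEST_URING to decide whether the uring path is actually exercised.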
00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.15.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.2 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:56.754 * spdk_dd linked to liburing 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:56.754 09:11:48 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:56.754 09:11:48 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:56.754 09:11:48 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:56.754 09:11:48 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:56.754 09:11:48 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:56.754 09:11:48 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:56.754 09:11:48 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_PGO_USE=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:05:56.755 09:11:48 
spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:05:56.755 09:11:48 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:05:56.755 09:11:48 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:56.755 09:11:48 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:56.755 09:11:48 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:56.755 09:11:48 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:56.755 09:11:48 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:56.755 09:11:48 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:56.755 09:11:48 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:56.755 09:11:48 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.755 09:11:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:56.755 ************************************ 00:05:56.755 START TEST spdk_dd_basic_rw 00:05:56.755 ************************************ 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:56.755 * Looking for test storage... 00:05:56.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.755 --rc genhtml_branch_coverage=1 00:05:56.755 --rc genhtml_function_coverage=1 00:05:56.755 --rc genhtml_legend=1 00:05:56.755 --rc geninfo_all_blocks=1 00:05:56.755 --rc geninfo_unexecuted_blocks=1 00:05:56.755 00:05:56.755 ' 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.755 --rc genhtml_branch_coverage=1 00:05:56.755 --rc genhtml_function_coverage=1 00:05:56.755 --rc genhtml_legend=1 00:05:56.755 --rc geninfo_all_blocks=1 00:05:56.755 --rc geninfo_unexecuted_blocks=1 00:05:56.755 00:05:56.755 ' 00:05:56.755 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.755 --rc genhtml_branch_coverage=1 00:05:56.755 --rc genhtml_function_coverage=1 00:05:56.755 --rc genhtml_legend=1 00:05:56.755 --rc geninfo_all_blocks=1 00:05:56.755 --rc geninfo_unexecuted_blocks=1 00:05:56.755 00:05:56.755 ' 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.756 --rc genhtml_branch_coverage=1 00:05:56.756 --rc genhtml_function_coverage=1 00:05:56.756 --rc genhtml_legend=1 00:05:56.756 --rc geninfo_all_blocks=1 00:05:56.756 --rc geninfo_unexecuted_blocks=1 00:05:56.756 00:05:56.756 ' 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.756 09:11:48 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:56.756 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:57.016 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 4 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:57.016 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 4 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.017 ************************************ 00:05:57.017 START TEST dd_bs_lt_native_bs 00:05:57.017 ************************************ 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:57.017 09:11:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:57.017 { 00:05:57.017 "subsystems": [ 00:05:57.017 { 00:05:57.017 "subsystem": "bdev", 00:05:57.017 "config": [ 00:05:57.017 { 00:05:57.017 "params": { 00:05:57.017 "trtype": "pcie", 00:05:57.017 "traddr": "0000:00:10.0", 00:05:57.017 "name": "Nvme0" 00:05:57.017 }, 00:05:57.017 "method": "bdev_nvme_attach_controller" 00:05:57.017 }, 00:05:57.017 { 00:05:57.017 "method": "bdev_wait_for_examine" 00:05:57.017 } 00:05:57.017 ] 00:05:57.017 } 00:05:57.017 ] 00:05:57.017 } 00:05:57.017 [2024-10-08 09:11:48.679786] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:57.017 [2024-10-08 09:11:48.679901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59931 ] 00:05:57.276 [2024-10-08 09:11:48.816456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.276 [2024-10-08 09:11:48.912838] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.534 [2024-10-08 09:11:48.971662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.534 [2024-10-08 09:11:49.081902] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:57.534 [2024-10-08 09:11:49.081969] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.534 [2024-10-08 09:11:49.201470] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.793 00:05:57.793 real 0m0.655s 00:05:57.793 user 0m0.443s 00:05:57.793 sys 0m0.168s 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.793 09:11:49 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:57.793 ************************************ 00:05:57.793 END TEST dd_bs_lt_native_bs 00:05:57.793 ************************************ 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.793 ************************************ 00:05:57.793 START TEST dd_rw 00:05:57.793 ************************************ 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:57.793 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.359 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:58.359 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:58.359 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:58.359 09:11:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.359 { 00:05:58.359 "subsystems": [ 00:05:58.359 { 00:05:58.359 "subsystem": "bdev", 00:05:58.359 "config": [ 00:05:58.359 { 00:05:58.359 "params": { 00:05:58.359 "trtype": "pcie", 00:05:58.359 "traddr": "0000:00:10.0", 00:05:58.359 "name": "Nvme0" 00:05:58.359 }, 00:05:58.359 "method": "bdev_nvme_attach_controller" 00:05:58.359 }, 00:05:58.359 { 00:05:58.359 "method": "bdev_wait_for_examine" 00:05:58.359 } 00:05:58.359 ] 00:05:58.359 } 
00:05:58.359 ] 00:05:58.359 } 00:05:58.359 [2024-10-08 09:11:49.997640] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:58.359 [2024-10-08 09:11:49.997753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59968 ] 00:05:58.617 [2024-10-08 09:11:50.134869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.617 [2024-10-08 09:11:50.212330] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.617 [2024-10-08 09:11:50.265011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.875  [2024-10-08T09:11:50.817Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:59.134 00:05:59.134 09:11:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:59.134 09:11:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:59.134 09:11:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.134 09:11:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.134 { 00:05:59.134 "subsystems": [ 00:05:59.134 { 00:05:59.134 "subsystem": "bdev", 00:05:59.134 "config": [ 00:05:59.134 { 00:05:59.134 "params": { 00:05:59.134 "trtype": "pcie", 00:05:59.134 "traddr": "0000:00:10.0", 00:05:59.134 "name": "Nvme0" 00:05:59.134 }, 00:05:59.134 "method": "bdev_nvme_attach_controller" 00:05:59.134 }, 00:05:59.134 { 00:05:59.134 "method": "bdev_wait_for_examine" 00:05:59.134 } 00:05:59.134 ] 00:05:59.134 } 00:05:59.134 ] 00:05:59.134 } 00:05:59.134 [2024-10-08 09:11:50.626531] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
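The long identify dump at the start of this section is captured only so the test can learn the device's native block size: the script matches the dump against "LBA Format #04: Data Size: *([0-9]+)" (format #04 being the reported current LBA format), stores the captured 4096 in lbaf/native_bs, and the dd_bs_lt_native_bs test then deliberately passes --bs=2048 and expects spdk_dd to refuse it with the "--bs value cannot be less than ... native block size" error seen above. A minimal bash sketch of that extraction follows; get_identify_dump and the variable names are illustrative stand-ins, not the script's actual helpers.

    # Sketch: pull the data size of the current LBA format out of an identify dump.
    id_output=$(get_identify_dump)        # hypothetical helper; the real dump is captured earlier in the script
    if [[ $id_output =~ LBA\ Format\ #04:\ Data\ Size:\ *([0-9]+) ]]; then
        native_bs=${BASH_REMATCH[1]}      # 4096 on this device
    fi
    echo "native block size: $native_bs"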
00:05:59.134 [2024-10-08 09:11:50.626627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59976 ] 00:05:59.134 [2024-10-08 09:11:50.765716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.393 [2024-10-08 09:11:50.873512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.393 [2024-10-08 09:11:50.931685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.393  [2024-10-08T09:11:51.335Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:59.652 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.652 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.652 [2024-10-08 09:11:51.329065] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
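The dd_rw iterations all follow the round trip that just completed for bs=4096, qd=1: a 61440-byte test file (count=15 blocks of the 4096-byte native block size) is written from dd.dump0 to the Nvme0n1 bdev, read back into dd.dump1, and the two dumps are compared with diff -q. A simplified sketch of one iteration is below; bdev.json stands in for the generated config that the real script feeds through /dev/fd, so treat the file handling as an assumption.

    # One dd_rw round trip (sketch; values taken from the log above).
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    native_bs=4096; qd=1; count=15
    size=$((native_bs * count))                                     # 61440 bytes

    dd if=/dev/urandom of=dd.dump0 bs=$native_bs count=$count       # generate test data
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=$native_bs --qd=$qd --json bdev.json
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs=$native_bs --qd=$qd --count=$count --json bdev.json
    diff -q dd.dump0 dd.dump1                                       # fails the test if the data differs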
00:05:59.652 [2024-10-08 09:11:51.329171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59997 ] 00:05:59.652 { 00:05:59.652 "subsystems": [ 00:05:59.652 { 00:05:59.652 "subsystem": "bdev", 00:05:59.652 "config": [ 00:05:59.652 { 00:05:59.652 "params": { 00:05:59.652 "trtype": "pcie", 00:05:59.652 "traddr": "0000:00:10.0", 00:05:59.652 "name": "Nvme0" 00:05:59.652 }, 00:05:59.652 "method": "bdev_nvme_attach_controller" 00:05:59.652 }, 00:05:59.652 { 00:05:59.652 "method": "bdev_wait_for_examine" 00:05:59.652 } 00:05:59.652 ] 00:05:59.652 } 00:05:59.652 ] 00:05:59.652 } 00:05:59.910 [2024-10-08 09:11:51.465894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.910 [2024-10-08 09:11:51.575780] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.169 [2024-10-08 09:11:51.628283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.169  [2024-10-08T09:11:52.110Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:00.427 00:06:00.427 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:00.427 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:00.427 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:00.427 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:00.427 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:00.427 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:00.427 09:11:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.993 09:11:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:00.993 09:11:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:00.993 09:11:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.993 09:11:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.993 { 00:06:00.993 "subsystems": [ 00:06:00.993 { 00:06:00.993 "subsystem": "bdev", 00:06:00.993 "config": [ 00:06:00.993 { 00:06:00.993 "params": { 00:06:00.993 "trtype": "pcie", 00:06:00.993 "traddr": "0000:00:10.0", 00:06:00.993 "name": "Nvme0" 00:06:00.993 }, 00:06:00.993 "method": "bdev_nvme_attach_controller" 00:06:00.993 }, 00:06:00.993 { 00:06:00.993 "method": "bdev_wait_for_examine" 00:06:00.993 } 00:06:00.993 ] 00:06:00.993 } 00:06:00.993 ] 00:06:00.993 } 00:06:00.993 [2024-10-08 09:11:52.594409] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
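Between iterations the clear_nvme helper wipes the region just written by streaming zeros back to the bdev, which is what the 1024/1024 [kB] copy from /dev/zero above corresponds to. The real helper lives in the dd test's common helpers; the sketch below only mirrors the behaviour visible in the log, again with a plain bdev.json file standing in for the /dev/fd config.

    # Zero-fill step between iterations (simplified sketch of what the log shows).
    clear_region() {
        local bdev=$1 size=$2
        local bs=1048576                                   # wipe in 1 MiB chunks
        local count=$(( (size + bs - 1) / bs ))            # 61440 bytes rounds up to 1 chunk
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
            --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count --json bdev.json
    }
    clear_region Nvme0n1 61440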
00:06:00.993 [2024-10-08 09:11:52.595221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60016 ] 00:06:01.252 [2024-10-08 09:11:52.732302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.252 [2024-10-08 09:11:52.847592] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.252 [2024-10-08 09:11:52.905340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.510  [2024-10-08T09:11:53.451Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:01.768 00:06:01.769 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:01.769 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:01.769 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:01.769 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.769 { 00:06:01.769 "subsystems": [ 00:06:01.769 { 00:06:01.769 "subsystem": "bdev", 00:06:01.769 "config": [ 00:06:01.769 { 00:06:01.769 "params": { 00:06:01.769 "trtype": "pcie", 00:06:01.769 "traddr": "0000:00:10.0", 00:06:01.769 "name": "Nvme0" 00:06:01.769 }, 00:06:01.769 "method": "bdev_nvme_attach_controller" 00:06:01.769 }, 00:06:01.769 { 00:06:01.769 "method": "bdev_wait_for_examine" 00:06:01.769 } 00:06:01.769 ] 00:06:01.769 } 00:06:01.769 ] 00:06:01.769 } 00:06:01.769 [2024-10-08 09:11:53.308867] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:01.769 [2024-10-08 09:11:53.308996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60035 ] 00:06:01.769 [2024-10-08 09:11:53.446912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.027 [2024-10-08 09:11:53.552535] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.027 [2024-10-08 09:11:53.610526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.285  [2024-10-08T09:11:53.968Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:02.285 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.285 09:11:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.543 [2024-10-08 09:11:54.006775] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:02.543 [2024-10-08 09:11:54.006873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60058 ] 00:06:02.543 { 00:06:02.543 "subsystems": [ 00:06:02.543 { 00:06:02.543 "subsystem": "bdev", 00:06:02.543 "config": [ 00:06:02.543 { 00:06:02.543 "params": { 00:06:02.543 "trtype": "pcie", 00:06:02.543 "traddr": "0000:00:10.0", 00:06:02.543 "name": "Nvme0" 00:06:02.543 }, 00:06:02.543 "method": "bdev_nvme_attach_controller" 00:06:02.543 }, 00:06:02.543 { 00:06:02.543 "method": "bdev_wait_for_examine" 00:06:02.543 } 00:06:02.543 ] 00:06:02.543 } 00:06:02.543 ] 00:06:02.543 } 00:06:02.543 [2024-10-08 09:11:54.146560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.801 [2024-10-08 09:11:54.257367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.801 [2024-10-08 09:11:54.311637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.801  [2024-10-08T09:11:54.742Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:03.059 00:06:03.059 09:11:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:03.059 09:11:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:03.059 09:11:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:03.059 09:11:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:03.059 09:11:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:03.059 09:11:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:03.059 09:11:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:03.059 09:11:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.626 09:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:03.626 09:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:03.626 09:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.626 09:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.626 { 00:06:03.626 "subsystems": [ 00:06:03.626 { 00:06:03.626 "subsystem": "bdev", 00:06:03.626 "config": [ 00:06:03.626 { 00:06:03.626 "params": { 00:06:03.626 "trtype": "pcie", 00:06:03.626 "traddr": "0000:00:10.0", 00:06:03.626 "name": "Nvme0" 00:06:03.626 }, 00:06:03.626 "method": "bdev_nvme_attach_controller" 00:06:03.626 }, 00:06:03.626 { 00:06:03.626 "method": "bdev_wait_for_examine" 00:06:03.626 } 00:06:03.626 ] 00:06:03.626 } 00:06:03.626 ] 00:06:03.626 } 00:06:03.626 [2024-10-08 09:11:55.180296] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
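The sweep now moves from the 4096-byte runs to 8192 bytes with count=7 (57344 bytes), and will later use 16384 bytes with count=3 (49152 bytes). basic_rw builds its block-size list by left-shifting the native block size, and the counts seen in the log are consistent with keeping each transfer within the original 61440-byte budget; the integer arithmetic below reproduces the logged values, though the exact derivation inside basic_rw is not visible in this log, so take it as an illustration.

    # Reproducing the (bs, count, size) triples that appear in the log.
    native_bs=4096
    for shift in 0 1 2; do
        bs=$((native_bs << shift))        # 4096, 8192, 16384
        count=$((61440 / bs))             # 15, 7, 3 (integer division)
        size=$((count * bs))              # 61440, 57344, 49152 bytes
        echo "bs=$bs count=$count size=$size"
    done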
00:06:03.626 [2024-10-08 09:11:55.180410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60077 ] 00:06:03.885 [2024-10-08 09:11:55.316563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.885 [2024-10-08 09:11:55.435957] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.885 [2024-10-08 09:11:55.492528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.143  [2024-10-08T09:11:55.826Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:04.143 00:06:04.143 09:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:04.143 09:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:04.143 09:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.143 09:11:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.402 { 00:06:04.402 "subsystems": [ 00:06:04.402 { 00:06:04.402 "subsystem": "bdev", 00:06:04.402 "config": [ 00:06:04.402 { 00:06:04.402 "params": { 00:06:04.402 "trtype": "pcie", 00:06:04.402 "traddr": "0000:00:10.0", 00:06:04.402 "name": "Nvme0" 00:06:04.402 }, 00:06:04.402 "method": "bdev_nvme_attach_controller" 00:06:04.402 }, 00:06:04.402 { 00:06:04.402 "method": "bdev_wait_for_examine" 00:06:04.402 } 00:06:04.402 ] 00:06:04.402 } 00:06:04.402 ] 00:06:04.402 } 00:06:04.402 [2024-10-08 09:11:55.864279] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:04.402 [2024-10-08 09:11:55.864391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60085 ] 00:06:04.402 [2024-10-08 09:11:56.002682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.664 [2024-10-08 09:11:56.104219] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.664 [2024-10-08 09:11:56.161431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.665  [2024-10-08T09:11:56.612Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:04.929 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.929 09:11:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.929 [2024-10-08 09:11:56.571930] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:04.929 [2024-10-08 09:11:56.572059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60106 ] 00:06:04.929 { 00:06:04.929 "subsystems": [ 00:06:04.929 { 00:06:04.929 "subsystem": "bdev", 00:06:04.929 "config": [ 00:06:04.929 { 00:06:04.929 "params": { 00:06:04.929 "trtype": "pcie", 00:06:04.929 "traddr": "0000:00:10.0", 00:06:04.929 "name": "Nvme0" 00:06:04.929 }, 00:06:04.929 "method": "bdev_nvme_attach_controller" 00:06:04.929 }, 00:06:04.929 { 00:06:04.929 "method": "bdev_wait_for_examine" 00:06:04.929 } 00:06:04.929 ] 00:06:04.929 } 00:06:04.929 ] 00:06:04.929 } 00:06:05.187 [2024-10-08 09:11:56.710355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.187 [2024-10-08 09:11:56.810691] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.187 [2024-10-08 09:11:56.867020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.444  [2024-10-08T09:11:57.385Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:05.702 00:06:05.702 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:05.702 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:05.702 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:05.702 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:05.702 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:05.702 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:05.702 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.268 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:06.268 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:06.268 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.268 09:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.268 [2024-10-08 09:11:57.770472] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
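Every spdk_dd call in this run is driven by the same generated configuration, which gen_conf prints before each invocation: it attaches the local NVMe controller at PCIe address 0000:00:10.0 as bdev Nvme0 and then waits for bdev examination to finish. Reassembled from the log fragments, and written to a plain file here purely for illustration (the test itself passes it on a file descriptor), it looks like this:

    # bdev configuration used by every spdk_dd invocation above (reconstructed from the log).
    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF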
00:06:06.268 [2024-10-08 09:11:57.770574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60125 ] 00:06:06.268 { 00:06:06.268 "subsystems": [ 00:06:06.268 { 00:06:06.268 "subsystem": "bdev", 00:06:06.268 "config": [ 00:06:06.268 { 00:06:06.268 "params": { 00:06:06.268 "trtype": "pcie", 00:06:06.268 "traddr": "0000:00:10.0", 00:06:06.268 "name": "Nvme0" 00:06:06.268 }, 00:06:06.268 "method": "bdev_nvme_attach_controller" 00:06:06.268 }, 00:06:06.268 { 00:06:06.268 "method": "bdev_wait_for_examine" 00:06:06.268 } 00:06:06.268 ] 00:06:06.268 } 00:06:06.268 ] 00:06:06.268 } 00:06:06.268 [2024-10-08 09:11:57.911371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.526 [2024-10-08 09:11:58.028266] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.526 [2024-10-08 09:11:58.086975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.526  [2024-10-08T09:11:58.467Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:06.784 00:06:06.784 09:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:06.784 09:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:06.784 09:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.784 09:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.784 { 00:06:06.784 "subsystems": [ 00:06:06.784 { 00:06:06.784 "subsystem": "bdev", 00:06:06.784 "config": [ 00:06:06.784 { 00:06:06.784 "params": { 00:06:06.784 "trtype": "pcie", 00:06:06.784 "traddr": "0000:00:10.0", 00:06:06.784 "name": "Nvme0" 00:06:06.784 }, 00:06:06.784 "method": "bdev_nvme_attach_controller" 00:06:06.784 }, 00:06:06.784 { 00:06:06.784 "method": "bdev_wait_for_examine" 00:06:06.784 } 00:06:06.784 ] 00:06:06.784 } 00:06:06.784 ] 00:06:06.784 } 00:06:07.042 [2024-10-08 09:11:58.477470] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:07.042 [2024-10-08 09:11:58.477598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60144 ] 00:06:07.042 [2024-10-08 09:11:58.616360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.042 [2024-10-08 09:11:58.692376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.301 [2024-10-08 09:11:58.746063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.301  [2024-10-08T09:11:59.242Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:07.559 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.559 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.559 { 00:06:07.559 "subsystems": [ 00:06:07.559 { 00:06:07.559 "subsystem": "bdev", 00:06:07.559 "config": [ 00:06:07.559 { 00:06:07.559 "params": { 00:06:07.559 "trtype": "pcie", 00:06:07.559 "traddr": "0000:00:10.0", 00:06:07.559 "name": "Nvme0" 00:06:07.559 }, 00:06:07.559 "method": "bdev_nvme_attach_controller" 00:06:07.559 }, 00:06:07.559 { 00:06:07.559 "method": "bdev_wait_for_examine" 00:06:07.559 } 00:06:07.559 ] 00:06:07.559 } 00:06:07.559 ] 00:06:07.559 } 00:06:07.559 [2024-10-08 09:11:59.124850] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
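Each block size is exercised at the two queue depths declared earlier, qds=(1 64); in the Copying lines the qd=64 passes report noticeably higher averages (for example 58 MBps against 19 MBps for the 4 KiB runs) because more I/O is kept in flight. The outer loops, as they appear in the xtrace, reduce to the sketch below, where run_cycle is a hypothetical wrapper around the write/read/diff/clear sequence shown earlier, not a real helper in the script.

    # Outer loops of basic_rw (sketch); run_cycle is a stand-in.
    bss=(4096 8192 16384)
    qds=(1 64)
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            run_cycle "$bs" "$qd"
        done
    done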
00:06:07.559 [2024-10-08 09:11:59.125442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:06:07.818 [2024-10-08 09:11:59.263348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.818 [2024-10-08 09:11:59.363496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.818 [2024-10-08 09:11:59.422192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.077  [2024-10-08T09:12:00.019Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:08.336 00:06:08.336 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:08.336 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:08.336 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:08.336 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:08.336 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:08.336 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:08.336 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:08.336 09:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.594 09:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:08.594 09:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:08.594 09:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:08.594 09:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.594 { 00:06:08.594 "subsystems": [ 00:06:08.594 { 00:06:08.594 "subsystem": "bdev", 00:06:08.594 "config": [ 00:06:08.594 { 00:06:08.594 "params": { 00:06:08.594 "trtype": "pcie", 00:06:08.594 "traddr": "0000:00:10.0", 00:06:08.594 "name": "Nvme0" 00:06:08.594 }, 00:06:08.594 "method": "bdev_nvme_attach_controller" 00:06:08.594 }, 00:06:08.594 { 00:06:08.594 "method": "bdev_wait_for_examine" 00:06:08.594 } 00:06:08.594 ] 00:06:08.594 } 00:06:08.594 ] 00:06:08.594 } 00:06:08.594 [2024-10-08 09:12:00.262168] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:08.594 [2024-10-08 09:12:00.262275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60184 ] 00:06:08.853 [2024-10-08 09:12:00.402854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.853 [2024-10-08 09:12:00.498686] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.112 [2024-10-08 09:12:00.555532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.112  [2024-10-08T09:12:01.053Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:09.370 00:06:09.370 09:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:09.370 09:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:09.370 09:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.370 09:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.370 { 00:06:09.370 "subsystems": [ 00:06:09.370 { 00:06:09.370 "subsystem": "bdev", 00:06:09.370 "config": [ 00:06:09.370 { 00:06:09.370 "params": { 00:06:09.370 "trtype": "pcie", 00:06:09.370 "traddr": "0000:00:10.0", 00:06:09.370 "name": "Nvme0" 00:06:09.370 }, 00:06:09.370 "method": "bdev_nvme_attach_controller" 00:06:09.370 }, 00:06:09.370 { 00:06:09.370 "method": "bdev_wait_for_examine" 00:06:09.370 } 00:06:09.370 ] 00:06:09.370 } 00:06:09.370 ] 00:06:09.370 } 00:06:09.370 [2024-10-08 09:12:00.936425] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:09.370 [2024-10-08 09:12:00.936537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60192 ] 00:06:09.629 [2024-10-08 09:12:01.070234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.629 [2024-10-08 09:12:01.165270] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.629 [2024-10-08 09:12:01.222331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.888  [2024-10-08T09:12:01.831Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:10.148 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.149 09:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.149 [2024-10-08 09:12:01.648112] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:10.149 [2024-10-08 09:12:01.648228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60214 ] 00:06:10.149 { 00:06:10.149 "subsystems": [ 00:06:10.149 { 00:06:10.149 "subsystem": "bdev", 00:06:10.149 "config": [ 00:06:10.149 { 00:06:10.149 "params": { 00:06:10.149 "trtype": "pcie", 00:06:10.149 "traddr": "0000:00:10.0", 00:06:10.149 "name": "Nvme0" 00:06:10.149 }, 00:06:10.149 "method": "bdev_nvme_attach_controller" 00:06:10.149 }, 00:06:10.149 { 00:06:10.149 "method": "bdev_wait_for_examine" 00:06:10.149 } 00:06:10.149 ] 00:06:10.149 } 00:06:10.149 ] 00:06:10.149 } 00:06:10.149 [2024-10-08 09:12:01.787350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.408 [2024-10-08 09:12:01.881715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.408 [2024-10-08 09:12:01.935795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.408  [2024-10-08T09:12:02.349Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:10.666 00:06:10.666 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:10.666 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:10.666 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:10.666 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:10.666 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:10.666 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:10.667 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.234 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:11.234 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:11.234 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.234 09:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.234 [2024-10-08 09:12:02.832930] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:11.234 [2024-10-08 09:12:02.833051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60233 ] 00:06:11.234 { 00:06:11.234 "subsystems": [ 00:06:11.234 { 00:06:11.234 "subsystem": "bdev", 00:06:11.234 "config": [ 00:06:11.234 { 00:06:11.234 "params": { 00:06:11.234 "trtype": "pcie", 00:06:11.234 "traddr": "0000:00:10.0", 00:06:11.234 "name": "Nvme0" 00:06:11.234 }, 00:06:11.234 "method": "bdev_nvme_attach_controller" 00:06:11.234 }, 00:06:11.234 { 00:06:11.234 "method": "bdev_wait_for_examine" 00:06:11.234 } 00:06:11.234 ] 00:06:11.234 } 00:06:11.234 ] 00:06:11.234 } 00:06:11.493 [2024-10-08 09:12:02.971857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.493 [2024-10-08 09:12:03.064909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.493 [2024-10-08 09:12:03.118522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.752  [2024-10-08T09:12:03.694Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:12.011 00:06:12.011 09:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:12.011 09:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:12.011 09:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.011 09:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.011 [2024-10-08 09:12:03.528847] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:12.011 [2024-10-08 09:12:03.528976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60254 ] 00:06:12.011 { 00:06:12.011 "subsystems": [ 00:06:12.011 { 00:06:12.011 "subsystem": "bdev", 00:06:12.011 "config": [ 00:06:12.011 { 00:06:12.011 "params": { 00:06:12.011 "trtype": "pcie", 00:06:12.011 "traddr": "0000:00:10.0", 00:06:12.011 "name": "Nvme0" 00:06:12.011 }, 00:06:12.011 "method": "bdev_nvme_attach_controller" 00:06:12.011 }, 00:06:12.011 { 00:06:12.011 "method": "bdev_wait_for_examine" 00:06:12.011 } 00:06:12.011 ] 00:06:12.011 } 00:06:12.011 ] 00:06:12.011 } 00:06:12.011 [2024-10-08 09:12:03.667465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.269 [2024-10-08 09:12:03.778184] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.269 [2024-10-08 09:12:03.831459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.269  [2024-10-08T09:12:04.211Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:12.528 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.528 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.788 [2024-10-08 09:12:04.232809] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:12.788 [2024-10-08 09:12:04.232920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60270 ] 00:06:12.788 { 00:06:12.788 "subsystems": [ 00:06:12.788 { 00:06:12.788 "subsystem": "bdev", 00:06:12.788 "config": [ 00:06:12.788 { 00:06:12.788 "params": { 00:06:12.788 "trtype": "pcie", 00:06:12.788 "traddr": "0000:00:10.0", 00:06:12.788 "name": "Nvme0" 00:06:12.788 }, 00:06:12.788 "method": "bdev_nvme_attach_controller" 00:06:12.788 }, 00:06:12.788 { 00:06:12.788 "method": "bdev_wait_for_examine" 00:06:12.788 } 00:06:12.788 ] 00:06:12.788 } 00:06:12.788 ] 00:06:12.788 } 00:06:12.788 [2024-10-08 09:12:04.368914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.788 [2024-10-08 09:12:04.469522] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.098 [2024-10-08 09:12:04.524292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.098  [2024-10-08T09:12:05.039Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:13.356 00:06:13.356 00:06:13.356 real 0m15.522s 00:06:13.356 user 0m11.324s 00:06:13.356 sys 0m5.709s 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.356 ************************************ 00:06:13.356 END TEST dd_rw 00:06:13.356 ************************************ 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 ************************************ 00:06:13.356 START TEST dd_rw_offset 00:06:13.356 ************************************ 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ikn89ll5ho0x8vntukhrbizbuzot42gr6cg0709c1fzy8phe2phgba62rypd85d5kbclji3qu17wq4jiz179l1dpddiddqvdmcq5a2pg5q2w4phmnb2ochkkyv4g26w0kh98wskck4hhfgdc9eo1dhw5fb0v31sjgm7y3itxswofokbber0sda2x92oa6tbry6hxevje45dsn6if6xyoh35d6158mzhva400zantfb8ewwjfru3zn5barjyhtuy58faf03hu347v3otc5vrbkhtpynp35ke21n098rmfcpeah5lyl7cfbif27wwb53mziz49y2zp43w4vqjuxcpfaatjny9u7u0sm3wgzb1e9a53ezup1aci6u3r0t4nw1ypwgi4slnsqmjjp80pj8tn8bp3eonk1vkdgnidbk007wgi4xzxd84da5qqut04pf5b2uq5cpp0q17k4a8enevk524p98gpoix2f6pd6ivk7wu8fgn343c7z20vranzz0qpmkib3ztrpgt6lcuotjarjcooc34rlkpfwskcr2cjksulhe9g3xl8c66ecqvt0qqdtxv9j59s9t680obtx9u3cbaw9al3dwrwmbcfhc4ct3begym81rew7q8980qzhkzvxmxn8an54e4jawddmjna4hlybkst9gwinoww9fubjp8blb6g52tbmn97vmbjrbshqoh528hukmzsuor8v9xve12xe2lbmx8v80rvski54cgcs7ngymewhyz8i8iow40eaa73hur8q5jm3xx0p738amnvv99ntrl7m45enbbl0e7ihf883p3e5ebjb8mtvkuz8bydekmh8thp75o7ig8x8divjdd2xc8cy2vutybifo9acsc0upklvziy3fv0rycacq4vmm6jdix0jhrzpuxy6a47rj2v9az87p6d7olm7p72x5t1vb6c0riw5bgohca2lvdzw5zdw8lppn5m82zc5pfo0yld7yjeynnb891hpmfgtz6fxxzwz4zd3gxsf3ofynah53rj1siut8jao9lxus2v5lfcsv0o7b5uhgtg8h8lk18phcl9xsq2pn2eyjwolpip8dbmysmmf9qwyycz2ezlzgqoea9sctadxb2zfmqmay9jovknt9u44ffz9v3tzpqa9j2vy9cxhcslx4cbkqs4l4kyrqj9l2h488bv6s19jwm5321plr225kkdzgc0yfljtjicayqvydktssmlmrrherl1uj626vd1x4popazhw21b5orrzefvuld95phenmcm2rteeytbfklcv8k4vx4gi2xnevkltps6n54qpbcuir3qej6l6tsextadvxemp1e4gor0ng2oq92jl0gei704yiw5942sdvdh8pntfxyxcij8ncplygz0pit5aio13inns133z43xe6xhpqlq6tpw1mewnq1qqx9mgmwhz6h5niedbrs6who7bivfutt75vht2o87fmbmfi9j9f023685275vhon1bwnlii4zow06uo40q1htyey4exccj1sqn95j5dbrsc3xddlqs2z6rkjjcm2pfoixe27uczwhoj571x5t0j9dlhknzpcj3t6gpp9dxj76aq32tq1raohskwsb64wevqe7bz1r0rz9lv82dgkorztez6vv4k2cgdoek1e8di67j866r9dlzmzbcokymfstaof6r5xh0hu9e53i9q1cq4g4ysf6l1or4p55zv2kdsyst5qbk7f3zv07fbpjqa2q84reqnnu0oa3ka92qo4b06vps7ahzp5uc91d5tzuvakf68t6gn7f9s82z4ei9ece5y2o0ptbcb3sfu29l3rwk093vkde8nanpnbvgpqkg6tv08m0nc0wnna8k2mp1fls59l9hz7cbh4i98l27belijlr67ohw5yi6j6ijliaexgokjzl7zejssfunshdflys334asfzz4lr24lwpepts3f9txfvn1yia6gxp1famzzvi31nyojyfb8l9putevshcfzx7ufh0e9sqknhmnqefya64q67riaco9lfmc5o0falx62hxll2zavrvdsfkdcvdv8ok85aj1qzb49vvw2ld9koiottzswfpieypjns9y261xt1tqesrqrud2kp1nke29imjg5py1m3pl35j55ev4qs6ay2p50j5o56tq9ejk46i6zzc3eogu1sq54a98165qs9ucq2yh5hik586ehttifuzniauybpsw5gmprqaj02phy308dzb0ogt9z9ho62zqpf1czdbg4p4ew8v0cnu3ktq0js7zfrhaw4j70l9qrugfpp9oih1eaaspse31fckrq1m9ayew9yeshpl8hxp69hj7ftogb52s1zd9ls66ljhwbocj6855cf58rqyoq50gabprgzvbqnvhf0l7bmu12bui7clfotifqfwhp2rb2b29dlcm1b5khv7mfj1ptp80oein9gyge3nznflv1ynzxjtd6gzu9157j247u3c5ynkwz6c6oze8oodebdca4hxeiol81bgcinhpjdgvrowalwwh7yz34jp5j3pyi9d2wkanzcjevehakkn36kyov6agnvyye4selr65gz6ljrdsohpdolnuald3of3g4tmas2icxult1z53z0dhpfkjhl72y1g8o25fz4fakchvp51hhccihtrsju1qvkdq1kyaw8t8a4tbi1sarmnqepqiolxix4uinkwl3bcujeppt5lgafewa6m282q42667hkvi99l670ixwbiirixpnu2vdmp0xgern2vlxnzw8erdzg1hvkqsuzgtmjapu71pvvhm4bohptwkem4fi09hifnsompxmayaewlq47bme6dp3wz471k46w21qtbjmokti2ww7u6y83uq7uyufatv8z0oj8jzb4rp19ry9zohrrae9z6o5ks2dcm7mb5ohuizpledu87dptjppzks5o6l0qljvukybeth2nv4wbynr67vxxo1jo6737fictk1uaesebyi3vye20bid790cxa9zm951uxxojmx227wjujix8czq4xym2xhgspha944rzf8t11qx42vvugp13yz41u67yftr3vjusnqza1qia7tec3pp7jqh6v1lbnoqjbbunvxtvqrnnz1v490w66vxfh46rz525n2nzcgc8050nmeebgtwfhcrne812u0esppwo3suulg3cy51yqby2q2gsgzngbn6jxpc30uijlsjoi6vzub9acfinfuf9aj8v7qutq3crc0cebd47wfxh5llnfhng5d16r10dedu1jevj7pv923clkywmcx6fynb9xlkhtfz6pgsxds306yzlza7zblfd5wc1vlbmd5u1fdz6kzxhmkeg4eupbk837scety7okhu8ubyxw7mjsb96fd59givt76qhuh0ln5hccwns1bja50qm8b0sbeu7da10037e0v1eeb6x95x0lcinhhxobm6kaitcthdx23cjsd55f8l49gak0yil4ezb17fkm7si31q6hcjw9ueemjpajw7mhxu36ro9zol
sekohvxoqtk3890ypq7rjk8t33k4o5nh7prolfdodhvylmkdvcef2duj8ozzk6hzy1uhs3t7hjn7mawaboty8p5tgf7cs7ygzzzu6c9kimfhdfd7rs01wlqeczpm587jvt2fet3mplsxnjjctv4ub7orj4o944letc2wfqaqkwuxnx3igxvvassg8l43pb8nbpoa7ksva2d13pgrhyruncj5hmspe6zbucx3ku6ey25uzql3er1gffo77taimrd2j2utp1oto7r9jjo4ev9sc7h74qx0bwb02qzypx6lwsqda32rud0mdzfc2srn6sa7x23j62z85cj427d4flaq1p8wk3byggk0sart6eroaoxvyubl0av5nsdgoaejmn0wgbh1na82t8h9ihpdj7a75uehcfzsffyo0ne1rbooupsh7zweqlowwzyw3y9buxf2saon9jbpwvsa3fs9pn1fw5c12wyhuw0uabzbycs0o6v60p9gyur73gqxzdibjnzacvnz8zm9upjbsnyv6f480wsiqtyp7t4cze 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:13.356 09:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 { 00:06:13.356 "subsystems": [ 00:06:13.356 { 00:06:13.356 "subsystem": "bdev", 00:06:13.356 "config": [ 00:06:13.356 { 00:06:13.356 "params": { 00:06:13.356 "trtype": "pcie", 00:06:13.356 "traddr": "0000:00:10.0", 00:06:13.356 "name": "Nvme0" 00:06:13.356 }, 00:06:13.356 "method": "bdev_nvme_attach_controller" 00:06:13.356 }, 00:06:13.356 { 00:06:13.356 "method": "bdev_wait_for_examine" 00:06:13.356 } 00:06:13.356 ] 00:06:13.356 } 00:06:13.356 ] 00:06:13.356 } 00:06:13.356 [2024-10-08 09:12:05.027593] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:13.356 [2024-10-08 09:12:05.027700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60300 ] 00:06:13.614 [2024-10-08 09:12:05.165743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.614 [2024-10-08 09:12:05.258323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.872 [2024-10-08 09:12:05.312715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.873  [2024-10-08T09:12:05.814Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:14.131 00:06:14.131 09:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:14.132 09:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:14.132 09:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:14.132 09:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:14.132 { 00:06:14.132 "subsystems": [ 00:06:14.132 { 00:06:14.132 "subsystem": "bdev", 00:06:14.132 "config": [ 00:06:14.132 { 00:06:14.132 "params": { 00:06:14.132 "trtype": "pcie", 00:06:14.132 "traddr": "0000:00:10.0", 00:06:14.132 "name": "Nvme0" 00:06:14.132 }, 00:06:14.132 "method": "bdev_nvme_attach_controller" 00:06:14.132 }, 00:06:14.132 { 00:06:14.132 "method": "bdev_wait_for_examine" 00:06:14.132 } 00:06:14.132 ] 00:06:14.132 } 00:06:14.132 ] 00:06:14.132 } 00:06:14.132 [2024-10-08 09:12:05.704829] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
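dd_rw_offset repeats the round trip with an offset: the 4096-character string generated above is written one block into the bdev with --seek=1, read back from the same position with --skip=1 --count=1, and compared against the original. The sketch below uses the same flags; the data generator and file names are simplified stand-ins for gen_bytes and the dd.dump files, and bdev.json again replaces the /dev/fd config.

    # Offset round trip (sketch). head/base64 only approximates gen_bytes 4096.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    data=$(head -c 4096 /dev/urandom | base64 -w0 | head -c 4096)
    printf '%s' "$data" > dd.dump0

    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json bdev.json            # write at block offset 1
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json bdev.json  # read the same block back

    read -rn4096 data_check < dd.dump1
    [[ $data == "$data_check" ]] && echo "offset round trip OK"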
00:06:14.132 [2024-10-08 09:12:05.704940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60319 ] 00:06:14.390 [2024-10-08 09:12:05.843669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.390 [2024-10-08 09:12:05.951726] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.390 [2024-10-08 09:12:06.010278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.649  [2024-10-08T09:12:06.592Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:14.909 00:06:14.909 09:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ikn89ll5ho0x8vntukhrbizbuzot42gr6cg0709c1fzy8phe2phgba62rypd85d5kbclji3qu17wq4jiz179l1dpddiddqvdmcq5a2pg5q2w4phmnb2ochkkyv4g26w0kh98wskck4hhfgdc9eo1dhw5fb0v31sjgm7y3itxswofokbber0sda2x92oa6tbry6hxevje45dsn6if6xyoh35d6158mzhva400zantfb8ewwjfru3zn5barjyhtuy58faf03hu347v3otc5vrbkhtpynp35ke21n098rmfcpeah5lyl7cfbif27wwb53mziz49y2zp43w4vqjuxcpfaatjny9u7u0sm3wgzb1e9a53ezup1aci6u3r0t4nw1ypwgi4slnsqmjjp80pj8tn8bp3eonk1vkdgnidbk007wgi4xzxd84da5qqut04pf5b2uq5cpp0q17k4a8enevk524p98gpoix2f6pd6ivk7wu8fgn343c7z20vranzz0qpmkib3ztrpgt6lcuotjarjcooc34rlkpfwskcr2cjksulhe9g3xl8c66ecqvt0qqdtxv9j59s9t680obtx9u3cbaw9al3dwrwmbcfhc4ct3begym81rew7q8980qzhkzvxmxn8an54e4jawddmjna4hlybkst9gwinoww9fubjp8blb6g52tbmn97vmbjrbshqoh528hukmzsuor8v9xve12xe2lbmx8v80rvski54cgcs7ngymewhyz8i8iow40eaa73hur8q5jm3xx0p738amnvv99ntrl7m45enbbl0e7ihf883p3e5ebjb8mtvkuz8bydekmh8thp75o7ig8x8divjdd2xc8cy2vutybifo9acsc0upklvziy3fv0rycacq4vmm6jdix0jhrzpuxy6a47rj2v9az87p6d7olm7p72x5t1vb6c0riw5bgohca2lvdzw5zdw8lppn5m82zc5pfo0yld7yjeynnb891hpmfgtz6fxxzwz4zd3gxsf3ofynah53rj1siut8jao9lxus2v5lfcsv0o7b5uhgtg8h8lk18phcl9xsq2pn2eyjwolpip8dbmysmmf9qwyycz2ezlzgqoea9sctadxb2zfmqmay9jovknt9u44ffz9v3tzpqa9j2vy9cxhcslx4cbkqs4l4kyrqj9l2h488bv6s19jwm5321plr225kkdzgc0yfljtjicayqvydktssmlmrrherl1uj626vd1x4popazhw21b5orrzefvuld95phenmcm2rteeytbfklcv8k4vx4gi2xnevkltps6n54qpbcuir3qej6l6tsextadvxemp1e4gor0ng2oq92jl0gei704yiw5942sdvdh8pntfxyxcij8ncplygz0pit5aio13inns133z43xe6xhpqlq6tpw1mewnq1qqx9mgmwhz6h5niedbrs6who7bivfutt75vht2o87fmbmfi9j9f023685275vhon1bwnlii4zow06uo40q1htyey4exccj1sqn95j5dbrsc3xddlqs2z6rkjjcm2pfoixe27uczwhoj571x5t0j9dlhknzpcj3t6gpp9dxj76aq32tq1raohskwsb64wevqe7bz1r0rz9lv82dgkorztez6vv4k2cgdoek1e8di67j866r9dlzmzbcokymfstaof6r5xh0hu9e53i9q1cq4g4ysf6l1or4p55zv2kdsyst5qbk7f3zv07fbpjqa2q84reqnnu0oa3ka92qo4b06vps7ahzp5uc91d5tzuvakf68t6gn7f9s82z4ei9ece5y2o0ptbcb3sfu29l3rwk093vkde8nanpnbvgpqkg6tv08m0nc0wnna8k2mp1fls59l9hz7cbh4i98l27belijlr67ohw5yi6j6ijliaexgokjzl7zejssfunshdflys334asfzz4lr24lwpepts3f9txfvn1yia6gxp1famzzvi31nyojyfb8l9putevshcfzx7ufh0e9sqknhmnqefya64q67riaco9lfmc5o0falx62hxll2zavrvdsfkdcvdv8ok85aj1qzb49vvw2ld9koiottzswfpieypjns9y261xt1tqesrqrud2kp1nke29imjg5py1m3pl35j55ev4qs6ay2p50j5o56tq9ejk46i6zzc3eogu1sq54a98165qs9ucq2yh5hik586ehttifuzniauybpsw5gmprqaj02phy308dzb0ogt9z9ho62zqpf1czdbg4p4ew8v0cnu3ktq0js7zfrhaw4j70l9qrugfpp9oih1eaaspse31fckrq1m9ayew9yeshpl8hxp69hj7ftogb52s1zd9ls66ljhwbocj6855cf58rqyoq50gabprgzvbqnvhf0l7bmu12bui7clfotifqfwhp2rb2b29dlcm1b5khv7mfj1ptp80oein9gyge3nznflv1ynzxjtd6gzu9157j247u3c5ynkwz6c6oze8oodebdca4hxeiol81bgcinhpjdgvrowalwwh7yz34jp5j3pyi9d2wkanzcjevehakkn36kyov6agnvyye4selr65gz6ljrd
sohpdolnuald3of3g4tmas2icxult1z53z0dhpfkjhl72y1g8o25fz4fakchvp51hhccihtrsju1qvkdq1kyaw8t8a4tbi1sarmnqepqiolxix4uinkwl3bcujeppt5lgafewa6m282q42667hkvi99l670ixwbiirixpnu2vdmp0xgern2vlxnzw8erdzg1hvkqsuzgtmjapu71pvvhm4bohptwkem4fi09hifnsompxmayaewlq47bme6dp3wz471k46w21qtbjmokti2ww7u6y83uq7uyufatv8z0oj8jzb4rp19ry9zohrrae9z6o5ks2dcm7mb5ohuizpledu87dptjppzks5o6l0qljvukybeth2nv4wbynr67vxxo1jo6737fictk1uaesebyi3vye20bid790cxa9zm951uxxojmx227wjujix8czq4xym2xhgspha944rzf8t11qx42vvugp13yz41u67yftr3vjusnqza1qia7tec3pp7jqh6v1lbnoqjbbunvxtvqrnnz1v490w66vxfh46rz525n2nzcgc8050nmeebgtwfhcrne812u0esppwo3suulg3cy51yqby2q2gsgzngbn6jxpc30uijlsjoi6vzub9acfinfuf9aj8v7qutq3crc0cebd47wfxh5llnfhng5d16r10dedu1jevj7pv923clkywmcx6fynb9xlkhtfz6pgsxds306yzlza7zblfd5wc1vlbmd5u1fdz6kzxhmkeg4eupbk837scety7okhu8ubyxw7mjsb96fd59givt76qhuh0ln5hccwns1bja50qm8b0sbeu7da10037e0v1eeb6x95x0lcinhhxobm6kaitcthdx23cjsd55f8l49gak0yil4ezb17fkm7si31q6hcjw9ueemjpajw7mhxu36ro9zolsekohvxoqtk3890ypq7rjk8t33k4o5nh7prolfdodhvylmkdvcef2duj8ozzk6hzy1uhs3t7hjn7mawaboty8p5tgf7cs7ygzzzu6c9kimfhdfd7rs01wlqeczpm587jvt2fet3mplsxnjjctv4ub7orj4o944letc2wfqaqkwuxnx3igxvvassg8l43pb8nbpoa7ksva2d13pgrhyruncj5hmspe6zbucx3ku6ey25uzql3er1gffo77taimrd2j2utp1oto7r9jjo4ev9sc7h74qx0bwb02qzypx6lwsqda32rud0mdzfc2srn6sa7x23j62z85cj427d4flaq1p8wk3byggk0sart6eroaoxvyubl0av5nsdgoaejmn0wgbh1na82t8h9ihpdj7a75uehcfzsffyo0ne1rbooupsh7zweqlowwzyw3y9buxf2saon9jbpwvsa3fs9pn1fw5c12wyhuw0uabzbycs0o6v60p9gyur73gqxzdibjnzacvnz8zm9upjbsnyv6f480wsiqtyp7t4cze == \i\k\n\8\9\l\l\5\h\o\0\x\8\v\n\t\u\k\h\r\b\i\z\b\u\z\o\t\4\2\g\r\6\c\g\0\7\0\9\c\1\f\z\y\8\p\h\e\2\p\h\g\b\a\6\2\r\y\p\d\8\5\d\5\k\b\c\l\j\i\3\q\u\1\7\w\q\4\j\i\z\1\7\9\l\1\d\p\d\d\i\d\d\q\v\d\m\c\q\5\a\2\p\g\5\q\2\w\4\p\h\m\n\b\2\o\c\h\k\k\y\v\4\g\2\6\w\0\k\h\9\8\w\s\k\c\k\4\h\h\f\g\d\c\9\e\o\1\d\h\w\5\f\b\0\v\3\1\s\j\g\m\7\y\3\i\t\x\s\w\o\f\o\k\b\b\e\r\0\s\d\a\2\x\9\2\o\a\6\t\b\r\y\6\h\x\e\v\j\e\4\5\d\s\n\6\i\f\6\x\y\o\h\3\5\d\6\1\5\8\m\z\h\v\a\4\0\0\z\a\n\t\f\b\8\e\w\w\j\f\r\u\3\z\n\5\b\a\r\j\y\h\t\u\y\5\8\f\a\f\0\3\h\u\3\4\7\v\3\o\t\c\5\v\r\b\k\h\t\p\y\n\p\3\5\k\e\2\1\n\0\9\8\r\m\f\c\p\e\a\h\5\l\y\l\7\c\f\b\i\f\2\7\w\w\b\5\3\m\z\i\z\4\9\y\2\z\p\4\3\w\4\v\q\j\u\x\c\p\f\a\a\t\j\n\y\9\u\7\u\0\s\m\3\w\g\z\b\1\e\9\a\5\3\e\z\u\p\1\a\c\i\6\u\3\r\0\t\4\n\w\1\y\p\w\g\i\4\s\l\n\s\q\m\j\j\p\8\0\p\j\8\t\n\8\b\p\3\e\o\n\k\1\v\k\d\g\n\i\d\b\k\0\0\7\w\g\i\4\x\z\x\d\8\4\d\a\5\q\q\u\t\0\4\p\f\5\b\2\u\q\5\c\p\p\0\q\1\7\k\4\a\8\e\n\e\v\k\5\2\4\p\9\8\g\p\o\i\x\2\f\6\p\d\6\i\v\k\7\w\u\8\f\g\n\3\4\3\c\7\z\2\0\v\r\a\n\z\z\0\q\p\m\k\i\b\3\z\t\r\p\g\t\6\l\c\u\o\t\j\a\r\j\c\o\o\c\3\4\r\l\k\p\f\w\s\k\c\r\2\c\j\k\s\u\l\h\e\9\g\3\x\l\8\c\6\6\e\c\q\v\t\0\q\q\d\t\x\v\9\j\5\9\s\9\t\6\8\0\o\b\t\x\9\u\3\c\b\a\w\9\a\l\3\d\w\r\w\m\b\c\f\h\c\4\c\t\3\b\e\g\y\m\8\1\r\e\w\7\q\8\9\8\0\q\z\h\k\z\v\x\m\x\n\8\a\n\5\4\e\4\j\a\w\d\d\m\j\n\a\4\h\l\y\b\k\s\t\9\g\w\i\n\o\w\w\9\f\u\b\j\p\8\b\l\b\6\g\5\2\t\b\m\n\9\7\v\m\b\j\r\b\s\h\q\o\h\5\2\8\h\u\k\m\z\s\u\o\r\8\v\9\x\v\e\1\2\x\e\2\l\b\m\x\8\v\8\0\r\v\s\k\i\5\4\c\g\c\s\7\n\g\y\m\e\w\h\y\z\8\i\8\i\o\w\4\0\e\a\a\7\3\h\u\r\8\q\5\j\m\3\x\x\0\p\7\3\8\a\m\n\v\v\9\9\n\t\r\l\7\m\4\5\e\n\b\b\l\0\e\7\i\h\f\8\8\3\p\3\e\5\e\b\j\b\8\m\t\v\k\u\z\8\b\y\d\e\k\m\h\8\t\h\p\7\5\o\7\i\g\8\x\8\d\i\v\j\d\d\2\x\c\8\c\y\2\v\u\t\y\b\i\f\o\9\a\c\s\c\0\u\p\k\l\v\z\i\y\3\f\v\0\r\y\c\a\c\q\4\v\m\m\6\j\d\i\x\0\j\h\r\z\p\u\x\y\6\a\4\7\r\j\2\v\9\a\z\8\7\p\6\d\7\o\l\m\7\p\7\2\x\5\t\1\v\b\6\c\0\r\i\w\5\b\g\o\h\c\a\2\l\v\d\z\w\5\z\d\w\8\l\p\p\n\5\m\8\2\z\c\5\p\f\o\0\y\l\d\7\y\j\e\y\n\n\b\8\9\1\h\p\m\f\g\t\z\6\f\x\x\z\w\z\4\z\d\3\g\x\s\f\3\o\f\y\n\a\h\5\3\r\
j\1\s\i\u\t\8\j\a\o\9\l\x\u\s\2\v\5\l\f\c\s\v\0\o\7\b\5\u\h\g\t\g\8\h\8\l\k\1\8\p\h\c\l\9\x\s\q\2\p\n\2\e\y\j\w\o\l\p\i\p\8\d\b\m\y\s\m\m\f\9\q\w\y\y\c\z\2\e\z\l\z\g\q\o\e\a\9\s\c\t\a\d\x\b\2\z\f\m\q\m\a\y\9\j\o\v\k\n\t\9\u\4\4\f\f\z\9\v\3\t\z\p\q\a\9\j\2\v\y\9\c\x\h\c\s\l\x\4\c\b\k\q\s\4\l\4\k\y\r\q\j\9\l\2\h\4\8\8\b\v\6\s\1\9\j\w\m\5\3\2\1\p\l\r\2\2\5\k\k\d\z\g\c\0\y\f\l\j\t\j\i\c\a\y\q\v\y\d\k\t\s\s\m\l\m\r\r\h\e\r\l\1\u\j\6\2\6\v\d\1\x\4\p\o\p\a\z\h\w\2\1\b\5\o\r\r\z\e\f\v\u\l\d\9\5\p\h\e\n\m\c\m\2\r\t\e\e\y\t\b\f\k\l\c\v\8\k\4\v\x\4\g\i\2\x\n\e\v\k\l\t\p\s\6\n\5\4\q\p\b\c\u\i\r\3\q\e\j\6\l\6\t\s\e\x\t\a\d\v\x\e\m\p\1\e\4\g\o\r\0\n\g\2\o\q\9\2\j\l\0\g\e\i\7\0\4\y\i\w\5\9\4\2\s\d\v\d\h\8\p\n\t\f\x\y\x\c\i\j\8\n\c\p\l\y\g\z\0\p\i\t\5\a\i\o\1\3\i\n\n\s\1\3\3\z\4\3\x\e\6\x\h\p\q\l\q\6\t\p\w\1\m\e\w\n\q\1\q\q\x\9\m\g\m\w\h\z\6\h\5\n\i\e\d\b\r\s\6\w\h\o\7\b\i\v\f\u\t\t\7\5\v\h\t\2\o\8\7\f\m\b\m\f\i\9\j\9\f\0\2\3\6\8\5\2\7\5\v\h\o\n\1\b\w\n\l\i\i\4\z\o\w\0\6\u\o\4\0\q\1\h\t\y\e\y\4\e\x\c\c\j\1\s\q\n\9\5\j\5\d\b\r\s\c\3\x\d\d\l\q\s\2\z\6\r\k\j\j\c\m\2\p\f\o\i\x\e\2\7\u\c\z\w\h\o\j\5\7\1\x\5\t\0\j\9\d\l\h\k\n\z\p\c\j\3\t\6\g\p\p\9\d\x\j\7\6\a\q\3\2\t\q\1\r\a\o\h\s\k\w\s\b\6\4\w\e\v\q\e\7\b\z\1\r\0\r\z\9\l\v\8\2\d\g\k\o\r\z\t\e\z\6\v\v\4\k\2\c\g\d\o\e\k\1\e\8\d\i\6\7\j\8\6\6\r\9\d\l\z\m\z\b\c\o\k\y\m\f\s\t\a\o\f\6\r\5\x\h\0\h\u\9\e\5\3\i\9\q\1\c\q\4\g\4\y\s\f\6\l\1\o\r\4\p\5\5\z\v\2\k\d\s\y\s\t\5\q\b\k\7\f\3\z\v\0\7\f\b\p\j\q\a\2\q\8\4\r\e\q\n\n\u\0\o\a\3\k\a\9\2\q\o\4\b\0\6\v\p\s\7\a\h\z\p\5\u\c\9\1\d\5\t\z\u\v\a\k\f\6\8\t\6\g\n\7\f\9\s\8\2\z\4\e\i\9\e\c\e\5\y\2\o\0\p\t\b\c\b\3\s\f\u\2\9\l\3\r\w\k\0\9\3\v\k\d\e\8\n\a\n\p\n\b\v\g\p\q\k\g\6\t\v\0\8\m\0\n\c\0\w\n\n\a\8\k\2\m\p\1\f\l\s\5\9\l\9\h\z\7\c\b\h\4\i\9\8\l\2\7\b\e\l\i\j\l\r\6\7\o\h\w\5\y\i\6\j\6\i\j\l\i\a\e\x\g\o\k\j\z\l\7\z\e\j\s\s\f\u\n\s\h\d\f\l\y\s\3\3\4\a\s\f\z\z\4\l\r\2\4\l\w\p\e\p\t\s\3\f\9\t\x\f\v\n\1\y\i\a\6\g\x\p\1\f\a\m\z\z\v\i\3\1\n\y\o\j\y\f\b\8\l\9\p\u\t\e\v\s\h\c\f\z\x\7\u\f\h\0\e\9\s\q\k\n\h\m\n\q\e\f\y\a\6\4\q\6\7\r\i\a\c\o\9\l\f\m\c\5\o\0\f\a\l\x\6\2\h\x\l\l\2\z\a\v\r\v\d\s\f\k\d\c\v\d\v\8\o\k\8\5\a\j\1\q\z\b\4\9\v\v\w\2\l\d\9\k\o\i\o\t\t\z\s\w\f\p\i\e\y\p\j\n\s\9\y\2\6\1\x\t\1\t\q\e\s\r\q\r\u\d\2\k\p\1\n\k\e\2\9\i\m\j\g\5\p\y\1\m\3\p\l\3\5\j\5\5\e\v\4\q\s\6\a\y\2\p\5\0\j\5\o\5\6\t\q\9\e\j\k\4\6\i\6\z\z\c\3\e\o\g\u\1\s\q\5\4\a\9\8\1\6\5\q\s\9\u\c\q\2\y\h\5\h\i\k\5\8\6\e\h\t\t\i\f\u\z\n\i\a\u\y\b\p\s\w\5\g\m\p\r\q\a\j\0\2\p\h\y\3\0\8\d\z\b\0\o\g\t\9\z\9\h\o\6\2\z\q\p\f\1\c\z\d\b\g\4\p\4\e\w\8\v\0\c\n\u\3\k\t\q\0\j\s\7\z\f\r\h\a\w\4\j\7\0\l\9\q\r\u\g\f\p\p\9\o\i\h\1\e\a\a\s\p\s\e\3\1\f\c\k\r\q\1\m\9\a\y\e\w\9\y\e\s\h\p\l\8\h\x\p\6\9\h\j\7\f\t\o\g\b\5\2\s\1\z\d\9\l\s\6\6\l\j\h\w\b\o\c\j\6\8\5\5\c\f\5\8\r\q\y\o\q\5\0\g\a\b\p\r\g\z\v\b\q\n\v\h\f\0\l\7\b\m\u\1\2\b\u\i\7\c\l\f\o\t\i\f\q\f\w\h\p\2\r\b\2\b\2\9\d\l\c\m\1\b\5\k\h\v\7\m\f\j\1\p\t\p\8\0\o\e\i\n\9\g\y\g\e\3\n\z\n\f\l\v\1\y\n\z\x\j\t\d\6\g\z\u\9\1\5\7\j\2\4\7\u\3\c\5\y\n\k\w\z\6\c\6\o\z\e\8\o\o\d\e\b\d\c\a\4\h\x\e\i\o\l\8\1\b\g\c\i\n\h\p\j\d\g\v\r\o\w\a\l\w\w\h\7\y\z\3\4\j\p\5\j\3\p\y\i\9\d\2\w\k\a\n\z\c\j\e\v\e\h\a\k\k\n\3\6\k\y\o\v\6\a\g\n\v\y\y\e\4\s\e\l\r\6\5\g\z\6\l\j\r\d\s\o\h\p\d\o\l\n\u\a\l\d\3\o\f\3\g\4\t\m\a\s\2\i\c\x\u\l\t\1\z\5\3\z\0\d\h\p\f\k\j\h\l\7\2\y\1\g\8\o\2\5\f\z\4\f\a\k\c\h\v\p\5\1\h\h\c\c\i\h\t\r\s\j\u\1\q\v\k\d\q\1\k\y\a\w\8\t\8\a\4\t\b\i\1\s\a\r\m\n\q\e\p\q\i\o\l\x\i\x\4\u\i\n\k\w\l\3\b\c\u\j\e\p\p\t\5\l\g\a\f\e\w\a\6\m\2\8\2\q\4\2\6\6\7\h\k\v\i\9\9\l\6\7\0\i\x\w\b\i\i\r\i\x\p\n\u\2\v\d\m\p\0\x\g\e\r\n\2\v\l\x\n\z\w\8\e\r\d\z\g\1\h\v\k\q\s\u\z\g\t
\m\j\a\p\u\7\1\p\v\v\h\m\4\b\o\h\p\t\w\k\e\m\4\f\i\0\9\h\i\f\n\s\o\m\p\x\m\a\y\a\e\w\l\q\4\7\b\m\e\6\d\p\3\w\z\4\7\1\k\4\6\w\2\1\q\t\b\j\m\o\k\t\i\2\w\w\7\u\6\y\8\3\u\q\7\u\y\u\f\a\t\v\8\z\0\o\j\8\j\z\b\4\r\p\1\9\r\y\9\z\o\h\r\r\a\e\9\z\6\o\5\k\s\2\d\c\m\7\m\b\5\o\h\u\i\z\p\l\e\d\u\8\7\d\p\t\j\p\p\z\k\s\5\o\6\l\0\q\l\j\v\u\k\y\b\e\t\h\2\n\v\4\w\b\y\n\r\6\7\v\x\x\o\1\j\o\6\7\3\7\f\i\c\t\k\1\u\a\e\s\e\b\y\i\3\v\y\e\2\0\b\i\d\7\9\0\c\x\a\9\z\m\9\5\1\u\x\x\o\j\m\x\2\2\7\w\j\u\j\i\x\8\c\z\q\4\x\y\m\2\x\h\g\s\p\h\a\9\4\4\r\z\f\8\t\1\1\q\x\4\2\v\v\u\g\p\1\3\y\z\4\1\u\6\7\y\f\t\r\3\v\j\u\s\n\q\z\a\1\q\i\a\7\t\e\c\3\p\p\7\j\q\h\6\v\1\l\b\n\o\q\j\b\b\u\n\v\x\t\v\q\r\n\n\z\1\v\4\9\0\w\6\6\v\x\f\h\4\6\r\z\5\2\5\n\2\n\z\c\g\c\8\0\5\0\n\m\e\e\b\g\t\w\f\h\c\r\n\e\8\1\2\u\0\e\s\p\p\w\o\3\s\u\u\l\g\3\c\y\5\1\y\q\b\y\2\q\2\g\s\g\z\n\g\b\n\6\j\x\p\c\3\0\u\i\j\l\s\j\o\i\6\v\z\u\b\9\a\c\f\i\n\f\u\f\9\a\j\8\v\7\q\u\t\q\3\c\r\c\0\c\e\b\d\4\7\w\f\x\h\5\l\l\n\f\h\n\g\5\d\1\6\r\1\0\d\e\d\u\1\j\e\v\j\7\p\v\9\2\3\c\l\k\y\w\m\c\x\6\f\y\n\b\9\x\l\k\h\t\f\z\6\p\g\s\x\d\s\3\0\6\y\z\l\z\a\7\z\b\l\f\d\5\w\c\1\v\l\b\m\d\5\u\1\f\d\z\6\k\z\x\h\m\k\e\g\4\e\u\p\b\k\8\3\7\s\c\e\t\y\7\o\k\h\u\8\u\b\y\x\w\7\m\j\s\b\9\6\f\d\5\9\g\i\v\t\7\6\q\h\u\h\0\l\n\5\h\c\c\w\n\s\1\b\j\a\5\0\q\m\8\b\0\s\b\e\u\7\d\a\1\0\0\3\7\e\0\v\1\e\e\b\6\x\9\5\x\0\l\c\i\n\h\h\x\o\b\m\6\k\a\i\t\c\t\h\d\x\2\3\c\j\s\d\5\5\f\8\l\4\9\g\a\k\0\y\i\l\4\e\z\b\1\7\f\k\m\7\s\i\3\1\q\6\h\c\j\w\9\u\e\e\m\j\p\a\j\w\7\m\h\x\u\3\6\r\o\9\z\o\l\s\e\k\o\h\v\x\o\q\t\k\3\8\9\0\y\p\q\7\r\j\k\8\t\3\3\k\4\o\5\n\h\7\p\r\o\l\f\d\o\d\h\v\y\l\m\k\d\v\c\e\f\2\d\u\j\8\o\z\z\k\6\h\z\y\1\u\h\s\3\t\7\h\j\n\7\m\a\w\a\b\o\t\y\8\p\5\t\g\f\7\c\s\7\y\g\z\z\z\u\6\c\9\k\i\m\f\h\d\f\d\7\r\s\0\1\w\l\q\e\c\z\p\m\5\8\7\j\v\t\2\f\e\t\3\m\p\l\s\x\n\j\j\c\t\v\4\u\b\7\o\r\j\4\o\9\4\4\l\e\t\c\2\w\f\q\a\q\k\w\u\x\n\x\3\i\g\x\v\v\a\s\s\g\8\l\4\3\p\b\8\n\b\p\o\a\7\k\s\v\a\2\d\1\3\p\g\r\h\y\r\u\n\c\j\5\h\m\s\p\e\6\z\b\u\c\x\3\k\u\6\e\y\2\5\u\z\q\l\3\e\r\1\g\f\f\o\7\7\t\a\i\m\r\d\2\j\2\u\t\p\1\o\t\o\7\r\9\j\j\o\4\e\v\9\s\c\7\h\7\4\q\x\0\b\w\b\0\2\q\z\y\p\x\6\l\w\s\q\d\a\3\2\r\u\d\0\m\d\z\f\c\2\s\r\n\6\s\a\7\x\2\3\j\6\2\z\8\5\c\j\4\2\7\d\4\f\l\a\q\1\p\8\w\k\3\b\y\g\g\k\0\s\a\r\t\6\e\r\o\a\o\x\v\y\u\b\l\0\a\v\5\n\s\d\g\o\a\e\j\m\n\0\w\g\b\h\1\n\a\8\2\t\8\h\9\i\h\p\d\j\7\a\7\5\u\e\h\c\f\z\s\f\f\y\o\0\n\e\1\r\b\o\o\u\p\s\h\7\z\w\e\q\l\o\w\w\z\y\w\3\y\9\b\u\x\f\2\s\a\o\n\9\j\b\p\w\v\s\a\3\f\s\9\p\n\1\f\w\5\c\1\2\w\y\h\u\w\0\u\a\b\z\b\y\c\s\0\o\6\v\6\0\p\9\g\y\u\r\7\3\g\q\x\z\d\i\b\j\n\z\a\c\v\n\z\8\z\m\9\u\p\j\b\s\n\y\v\6\f\4\8\0\w\s\i\q\t\y\p\7\t\4\c\z\e ]] 00:06:14.910 ************************************ 00:06:14.910 END TEST dd_rw_offset 00:06:14.910 ************************************ 00:06:14.910 00:06:14.910 real 0m1.429s 00:06:14.910 user 0m0.979s 00:06:14.910 sys 0m0.622s 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.910 09:12:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.910 [2024-10-08 09:12:06.451163] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:14.910 [2024-10-08 09:12:06.451256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60350 ] 00:06:14.910 { 00:06:14.910 "subsystems": [ 00:06:14.910 { 00:06:14.910 "subsystem": "bdev", 00:06:14.910 "config": [ 00:06:14.910 { 00:06:14.910 "params": { 00:06:14.910 "trtype": "pcie", 00:06:14.910 "traddr": "0000:00:10.0", 00:06:14.910 "name": "Nvme0" 00:06:14.910 }, 00:06:14.910 "method": "bdev_nvme_attach_controller" 00:06:14.910 }, 00:06:14.910 { 00:06:14.910 "method": "bdev_wait_for_examine" 00:06:14.910 } 00:06:14.910 ] 00:06:14.910 } 00:06:14.910 ] 00:06:14.910 } 00:06:14.910 [2024-10-08 09:12:06.580870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.168 [2024-10-08 09:12:06.706911] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.168 [2024-10-08 09:12:06.765374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.427  [2024-10-08T09:12:07.370Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:15.687 00:06:15.687 09:12:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.687 00:06:15.687 real 0m18.891s 00:06:15.687 user 0m13.473s 00:06:15.687 sys 0m7.038s 00:06:15.687 09:12:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.687 09:12:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.687 ************************************ 00:06:15.687 END TEST spdk_dd_basic_rw 00:06:15.687 ************************************ 00:06:15.687 09:12:07 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:15.687 09:12:07 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.687 09:12:07 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.687 09:12:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:15.687 ************************************ 00:06:15.687 START TEST spdk_dd_posix 00:06:15.687 ************************************ 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:15.687 * Looking for test storage... 
00:06:15.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.687 --rc genhtml_branch_coverage=1 00:06:15.687 --rc genhtml_function_coverage=1 00:06:15.687 --rc genhtml_legend=1 00:06:15.687 --rc geninfo_all_blocks=1 00:06:15.687 --rc geninfo_unexecuted_blocks=1 00:06:15.687 00:06:15.687 ' 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.687 --rc genhtml_branch_coverage=1 00:06:15.687 --rc genhtml_function_coverage=1 00:06:15.687 --rc genhtml_legend=1 00:06:15.687 --rc geninfo_all_blocks=1 00:06:15.687 --rc geninfo_unexecuted_blocks=1 00:06:15.687 00:06:15.687 ' 00:06:15.687 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.687 --rc genhtml_branch_coverage=1 00:06:15.687 --rc genhtml_function_coverage=1 00:06:15.687 --rc genhtml_legend=1 00:06:15.687 --rc geninfo_all_blocks=1 00:06:15.687 --rc geninfo_unexecuted_blocks=1 00:06:15.687 00:06:15.687 ' 00:06:15.688 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.688 --rc genhtml_branch_coverage=1 00:06:15.688 --rc genhtml_function_coverage=1 00:06:15.688 --rc genhtml_legend=1 00:06:15.688 --rc geninfo_all_blocks=1 00:06:15.688 --rc geninfo_unexecuted_blocks=1 00:06:15.688 00:06:15.688 ' 00:06:15.688 09:12:07 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:15.688 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.946 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.946 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.946 09:12:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:15.947 * First test run, liburing in use 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:15.947 ************************************ 00:06:15.947 START TEST dd_flag_append 00:06:15.947 ************************************ 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=xprt9tuonatu1w1bkq68b0usm3yol0sc 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=cp6jsrbt1bdybydvorgqzgqz4qzb15lg 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s xprt9tuonatu1w1bkq68b0usm3yol0sc 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s cp6jsrbt1bdybydvorgqzgqz4qzb15lg 00:06:15.947 09:12:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:15.947 [2024-10-08 09:12:07.456053] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:15.947 [2024-10-08 09:12:07.456152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60421 ] 00:06:15.947 [2024-10-08 09:12:07.595839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.205 [2024-10-08 09:12:07.718747] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.205 [2024-10-08 09:12:07.776461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.205  [2024-10-08T09:12:08.146Z] Copying: 32/32 [B] (average 31 kBps) 00:06:16.463 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ cp6jsrbt1bdybydvorgqzgqz4qzb15lgxprt9tuonatu1w1bkq68b0usm3yol0sc == \c\p\6\j\s\r\b\t\1\b\d\y\b\y\d\v\o\r\g\q\z\g\q\z\4\q\z\b\1\5\l\g\x\p\r\t\9\t\u\o\n\a\t\u\1\w\1\b\k\q\6\8\b\0\u\s\m\3\y\o\l\0\s\c ]] 00:06:16.463 00:06:16.463 real 0m0.657s 00:06:16.463 user 0m0.382s 00:06:16.463 sys 0m0.298s 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:16.463 ************************************ 00:06:16.463 END TEST dd_flag_append 00:06:16.463 ************************************ 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:16.463 ************************************ 00:06:16.463 START TEST dd_flag_directory 00:06:16.463 ************************************ 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.463 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.464 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.464 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:16.722 [2024-10-08 09:12:08.159393] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:16.722 [2024-10-08 09:12:08.159488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60449 ] 00:06:16.722 [2024-10-08 09:12:08.298086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.981 [2024-10-08 09:12:08.413312] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.981 [2024-10-08 09:12:08.472945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.981 [2024-10-08 09:12:08.513383] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:16.981 [2024-10-08 09:12:08.513438] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:16.981 [2024-10-08 09:12:08.513458] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.981 [2024-10-08 09:12:08.635284] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.239 09:12:08 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.239 09:12:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:17.239 [2024-10-08 09:12:08.806482] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:17.239 [2024-10-08 09:12:08.806582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60464 ] 00:06:17.498 [2024-10-08 09:12:08.948543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.498 [2024-10-08 09:12:09.065946] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.498 [2024-10-08 09:12:09.127811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.498 [2024-10-08 09:12:09.168137] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:17.498 [2024-10-08 09:12:09.168197] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:17.498 [2024-10-08 09:12:09.168216] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.757 [2024-10-08 09:12:09.296043] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.757 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:17.757 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.757 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:17.757 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:17.757 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:17.757 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.757 00:06:17.757 real 0m1.308s 00:06:17.757 user 0m0.771s 00:06:17.757 sys 0m0.324s 00:06:17.757 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.757 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:17.757 ************************************ 00:06:17.757 END TEST dd_flag_directory 00:06:17.757 ************************************ 00:06:18.016 09:12:09 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.016 ************************************ 00:06:18.016 START TEST dd_flag_nofollow 00:06:18.016 ************************************ 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.016 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.017 09:12:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.017 [2024-10-08 09:12:09.532395] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:18.017 [2024-10-08 09:12:09.532496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60493 ] 00:06:18.017 [2024-10-08 09:12:09.683811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.277 [2024-10-08 09:12:09.802600] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.277 [2024-10-08 09:12:09.859354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.277 [2024-10-08 09:12:09.897003] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:18.277 [2024-10-08 09:12:09.897066] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:18.277 [2024-10-08 09:12:09.897084] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.535 [2024-10-08 09:12:10.016021] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.535 09:12:10 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.535 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:18.535 [2024-10-08 09:12:10.179963] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:18.535 [2024-10-08 09:12:10.180062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60502 ] 00:06:18.794 [2024-10-08 09:12:10.317460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.794 [2024-10-08 09:12:10.433349] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.052 [2024-10-08 09:12:10.494485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.052 [2024-10-08 09:12:10.533815] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:19.052 [2024-10-08 09:12:10.533882] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:19.052 [2024-10-08 09:12:10.533902] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.052 [2024-10-08 09:12:10.652918] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:19.311 09:12:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.311 [2024-10-08 09:12:10.815782] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:19.311 [2024-10-08 09:12:10.815920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60515 ] 00:06:19.311 [2024-10-08 09:12:10.956632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.570 [2024-10-08 09:12:11.090538] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.570 [2024-10-08 09:12:11.147248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.570  [2024-10-08T09:12:11.511Z] Copying: 512/512 [B] (average 500 kBps) 00:06:19.828 00:06:19.828 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 306fly2xp5xpwnopcja22ots0us0ah5x7ya5e6bzl03ki0lxdyqwmsrk4oufkfojh0bu7kinbf2jhgnb35uv0ns21zdq7a0lq5oa7fuqjxiw3fc37e4a4scmtzhuggbury43o1mi0byl0vsybvd2wnnge8ei3pa9gs5nf1kt4j7mllfimurgldo5bnt48podlyuzho9ktv03r43bqr787ojc80oc7ccaif4eqgz0c363uddlfo7uty9n56umfk2gnuenwipkpy0gk0yhova5677cfpsrfx8x6ym8a0dy4oclh125m9y2980d60g9p0ka5l4m6cdutrvitpq6suzkgxq0jxfrkb04q4tc31pi6kna6w0c8aqyl6h4afwq9simqtfnhs8pakojkgxe04c9st36fpgq6pybvvn313jcmlizdofaukd3zjf5bcapojbe6rtk26244hk7rm3nsuzikee7fbshhpnvhey8mtv5xnje1wvubl6quzwrk2cdvy97 == \3\0\6\f\l\y\2\x\p\5\x\p\w\n\o\p\c\j\a\2\2\o\t\s\0\u\s\0\a\h\5\x\7\y\a\5\e\6\b\z\l\0\3\k\i\0\l\x\d\y\q\w\m\s\r\k\4\o\u\f\k\f\o\j\h\0\b\u\7\k\i\n\b\f\2\j\h\g\n\b\3\5\u\v\0\n\s\2\1\z\d\q\7\a\0\l\q\5\o\a\7\f\u\q\j\x\i\w\3\f\c\3\7\e\4\a\4\s\c\m\t\z\h\u\g\g\b\u\r\y\4\3\o\1\m\i\0\b\y\l\0\v\s\y\b\v\d\2\w\n\n\g\e\8\e\i\3\p\a\9\g\s\5\n\f\1\k\t\4\j\7\m\l\l\f\i\m\u\r\g\l\d\o\5\b\n\t\4\8\p\o\d\l\y\u\z\h\o\9\k\t\v\0\3\r\4\3\b\q\r\7\8\7\o\j\c\8\0\o\c\7\c\c\a\i\f\4\e\q\g\z\0\c\3\6\3\u\d\d\l\f\o\7\u\t\y\9\n\5\6\u\m\f\k\2\g\n\u\e\n\w\i\p\k\p\y\0\g\k\0\y\h\o\v\a\5\6\7\7\c\f\p\s\r\f\x\8\x\6\y\m\8\a\0\d\y\4\o\c\l\h\1\2\5\m\9\y\2\9\8\0\d\6\0\g\9\p\0\k\a\5\l\4\m\6\c\d\u\t\r\v\i\t\p\q\6\s\u\z\k\g\x\q\0\j\x\f\r\k\b\0\4\q\4\t\c\3\1\p\i\6\k\n\a\6\w\0\c\8\a\q\y\l\6\h\4\a\f\w\q\9\s\i\m\q\t\f\n\h\s\8\p\a\k\o\j\k\g\x\e\0\4\c\9\s\t\3\6\f\p\g\q\6\p\y\b\v\v\n\3\1\3\j\c\m\l\i\z\d\o\f\a\u\k\d\3\z\j\f\5\b\c\a\p\o\j\b\e\6\r\t\k\2\6\2\4\4\h\k\7\r\m\3\n\s\u\z\i\k\e\e\7\f\b\s\h\h\p\n\v\h\e\y\8\m\t\v\5\x\n\j\e\1\w\v\u\b\l\6\q\u\z\w\r\k\2\c\d\v\y\9\7 ]] 00:06:19.828 00:06:19.828 real 0m1.953s 00:06:19.828 user 0m1.149s 00:06:19.828 sys 0m0.616s 00:06:19.828 ************************************ 00:06:19.828 END TEST dd_flag_nofollow 00:06:19.828 ************************************ 00:06:19.828 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.828 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:19.828 09:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:19.828 09:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.828 09:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.828 09:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:19.828 ************************************ 00:06:19.828 START TEST dd_flag_noatime 00:06:19.828 ************************************ 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1728378731 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1728378731 00:06:19.829 09:12:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:21.220 09:12:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.220 [2024-10-08 09:12:12.542981] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:21.220 [2024-10-08 09:12:12.543094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60558 ] 00:06:21.220 [2024-10-08 09:12:12.682057] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.220 [2024-10-08 09:12:12.810620] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.220 [2024-10-08 09:12:12.868927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.496  [2024-10-08T09:12:13.179Z] Copying: 512/512 [B] (average 500 kBps) 00:06:21.496 00:06:21.496 09:12:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.496 09:12:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1728378731 )) 00:06:21.496 09:12:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.496 09:12:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1728378731 )) 00:06:21.496 09:12:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.755 [2024-10-08 09:12:13.209091] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:21.755 [2024-10-08 09:12:13.209209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60571 ] 00:06:21.755 [2024-10-08 09:12:13.349699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.013 [2024-10-08 09:12:13.476464] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.013 [2024-10-08 09:12:13.535896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.013  [2024-10-08T09:12:13.955Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.272 00:06:22.272 09:12:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.272 09:12:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1728378733 )) 00:06:22.272 00:06:22.272 real 0m2.346s 00:06:22.272 user 0m0.784s 00:06:22.272 sys 0m0.608s 00:06:22.272 09:12:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.272 09:12:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:22.272 ************************************ 00:06:22.273 END TEST dd_flag_noatime 00:06:22.273 ************************************ 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:22.273 ************************************ 00:06:22.273 START TEST dd_flags_misc 00:06:22.273 ************************************ 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.273 09:12:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:22.273 [2024-10-08 09:12:13.924240] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:22.273 [2024-10-08 09:12:13.924347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60605 ] 00:06:22.531 [2024-10-08 09:12:14.064771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.531 [2024-10-08 09:12:14.181520] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.790 [2024-10-08 09:12:14.236307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.790  [2024-10-08T09:12:14.732Z] Copying: 512/512 [B] (average 500 kBps) 00:06:23.049 00:06:23.049 09:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2mrq9t00fbu6l8ts6erwrkvknv875sfc96lzfz72upne0jvlg40du164pp2cfdcqb6q8kb2fyccw987tqqx7n70e7btbpyi9f2qmczdl9v84xowgtij64wjz6aa9hi93tpp4n1eib9r2pw77y6dxn9ggnffwoefv01xdbprgjjc9cpdr82qs0c25jbnsehxuudzirsunqnfd7oj3dvyvj2oevqasvffei3fh089ga0v42wkw9nhrieq98jkq84bkbznk0c1qp838fhk7a8vt90nm04ifopa61fjwzhoh33xdlwbw3xl2si1j245bn83f7ovjn8nraf9z77zdscgs7zd82jqdk2hmb1hnds85jfxe2heny74umzakmvcusi6mjoxobvqzu4rkwpe7b0jkvf0gx49r4pb27o976q3qxpc41mly0ynoqscwbjisxw8xnnsllj0swzzt179vehyf1z3miusbuhudb1zsqce0482ajgmy2fdjlqd2oyi903kn == \2\m\r\q\9\t\0\0\f\b\u\6\l\8\t\s\6\e\r\w\r\k\v\k\n\v\8\7\5\s\f\c\9\6\l\z\f\z\7\2\u\p\n\e\0\j\v\l\g\4\0\d\u\1\6\4\p\p\2\c\f\d\c\q\b\6\q\8\k\b\2\f\y\c\c\w\9\8\7\t\q\q\x\7\n\7\0\e\7\b\t\b\p\y\i\9\f\2\q\m\c\z\d\l\9\v\8\4\x\o\w\g\t\i\j\6\4\w\j\z\6\a\a\9\h\i\9\3\t\p\p\4\n\1\e\i\b\9\r\2\p\w\7\7\y\6\d\x\n\9\g\g\n\f\f\w\o\e\f\v\0\1\x\d\b\p\r\g\j\j\c\9\c\p\d\r\8\2\q\s\0\c\2\5\j\b\n\s\e\h\x\u\u\d\z\i\r\s\u\n\q\n\f\d\7\o\j\3\d\v\y\v\j\2\o\e\v\q\a\s\v\f\f\e\i\3\f\h\0\8\9\g\a\0\v\4\2\w\k\w\9\n\h\r\i\e\q\9\8\j\k\q\8\4\b\k\b\z\n\k\0\c\1\q\p\8\3\8\f\h\k\7\a\8\v\t\9\0\n\m\0\4\i\f\o\p\a\6\1\f\j\w\z\h\o\h\3\3\x\d\l\w\b\w\3\x\l\2\s\i\1\j\2\4\5\b\n\8\3\f\7\o\v\j\n\8\n\r\a\f\9\z\7\7\z\d\s\c\g\s\7\z\d\8\2\j\q\d\k\2\h\m\b\1\h\n\d\s\8\5\j\f\x\e\2\h\e\n\y\7\4\u\m\z\a\k\m\v\c\u\s\i\6\m\j\o\x\o\b\v\q\z\u\4\r\k\w\p\e\7\b\0\j\k\v\f\0\g\x\4\9\r\4\p\b\2\7\o\9\7\6\q\3\q\x\p\c\4\1\m\l\y\0\y\n\o\q\s\c\w\b\j\i\s\x\w\8\x\n\n\s\l\l\j\0\s\w\z\z\t\1\7\9\v\e\h\y\f\1\z\3\m\i\u\s\b\u\h\u\d\b\1\z\s\q\c\e\0\4\8\2\a\j\g\m\y\2\f\d\j\l\q\d\2\o\y\i\9\0\3\k\n ]] 00:06:23.049 09:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.049 09:12:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:23.049 [2024-10-08 09:12:14.533716] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:23.049 [2024-10-08 09:12:14.533823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60615 ] 00:06:23.049 [2024-10-08 09:12:14.668752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.307 [2024-10-08 09:12:14.782753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.307 [2024-10-08 09:12:14.839064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.307  [2024-10-08T09:12:15.249Z] Copying: 512/512 [B] (average 500 kBps) 00:06:23.566 00:06:23.566 09:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2mrq9t00fbu6l8ts6erwrkvknv875sfc96lzfz72upne0jvlg40du164pp2cfdcqb6q8kb2fyccw987tqqx7n70e7btbpyi9f2qmczdl9v84xowgtij64wjz6aa9hi93tpp4n1eib9r2pw77y6dxn9ggnffwoefv01xdbprgjjc9cpdr82qs0c25jbnsehxuudzirsunqnfd7oj3dvyvj2oevqasvffei3fh089ga0v42wkw9nhrieq98jkq84bkbznk0c1qp838fhk7a8vt90nm04ifopa61fjwzhoh33xdlwbw3xl2si1j245bn83f7ovjn8nraf9z77zdscgs7zd82jqdk2hmb1hnds85jfxe2heny74umzakmvcusi6mjoxobvqzu4rkwpe7b0jkvf0gx49r4pb27o976q3qxpc41mly0ynoqscwbjisxw8xnnsllj0swzzt179vehyf1z3miusbuhudb1zsqce0482ajgmy2fdjlqd2oyi903kn == \2\m\r\q\9\t\0\0\f\b\u\6\l\8\t\s\6\e\r\w\r\k\v\k\n\v\8\7\5\s\f\c\9\6\l\z\f\z\7\2\u\p\n\e\0\j\v\l\g\4\0\d\u\1\6\4\p\p\2\c\f\d\c\q\b\6\q\8\k\b\2\f\y\c\c\w\9\8\7\t\q\q\x\7\n\7\0\e\7\b\t\b\p\y\i\9\f\2\q\m\c\z\d\l\9\v\8\4\x\o\w\g\t\i\j\6\4\w\j\z\6\a\a\9\h\i\9\3\t\p\p\4\n\1\e\i\b\9\r\2\p\w\7\7\y\6\d\x\n\9\g\g\n\f\f\w\o\e\f\v\0\1\x\d\b\p\r\g\j\j\c\9\c\p\d\r\8\2\q\s\0\c\2\5\j\b\n\s\e\h\x\u\u\d\z\i\r\s\u\n\q\n\f\d\7\o\j\3\d\v\y\v\j\2\o\e\v\q\a\s\v\f\f\e\i\3\f\h\0\8\9\g\a\0\v\4\2\w\k\w\9\n\h\r\i\e\q\9\8\j\k\q\8\4\b\k\b\z\n\k\0\c\1\q\p\8\3\8\f\h\k\7\a\8\v\t\9\0\n\m\0\4\i\f\o\p\a\6\1\f\j\w\z\h\o\h\3\3\x\d\l\w\b\w\3\x\l\2\s\i\1\j\2\4\5\b\n\8\3\f\7\o\v\j\n\8\n\r\a\f\9\z\7\7\z\d\s\c\g\s\7\z\d\8\2\j\q\d\k\2\h\m\b\1\h\n\d\s\8\5\j\f\x\e\2\h\e\n\y\7\4\u\m\z\a\k\m\v\c\u\s\i\6\m\j\o\x\o\b\v\q\z\u\4\r\k\w\p\e\7\b\0\j\k\v\f\0\g\x\4\9\r\4\p\b\2\7\o\9\7\6\q\3\q\x\p\c\4\1\m\l\y\0\y\n\o\q\s\c\w\b\j\i\s\x\w\8\x\n\n\s\l\l\j\0\s\w\z\z\t\1\7\9\v\e\h\y\f\1\z\3\m\i\u\s\b\u\h\u\d\b\1\z\s\q\c\e\0\4\8\2\a\j\g\m\y\2\f\d\j\l\q\d\2\o\y\i\9\0\3\k\n ]] 00:06:23.566 09:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.566 09:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:23.566 [2024-10-08 09:12:15.160978] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:23.566 [2024-10-08 09:12:15.161083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60624 ] 00:06:23.825 [2024-10-08 09:12:15.300696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.825 [2024-10-08 09:12:15.416562] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.825 [2024-10-08 09:12:15.471211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.085  [2024-10-08T09:12:15.768Z] Copying: 512/512 [B] (average 125 kBps) 00:06:24.085 00:06:24.085 09:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2mrq9t00fbu6l8ts6erwrkvknv875sfc96lzfz72upne0jvlg40du164pp2cfdcqb6q8kb2fyccw987tqqx7n70e7btbpyi9f2qmczdl9v84xowgtij64wjz6aa9hi93tpp4n1eib9r2pw77y6dxn9ggnffwoefv01xdbprgjjc9cpdr82qs0c25jbnsehxuudzirsunqnfd7oj3dvyvj2oevqasvffei3fh089ga0v42wkw9nhrieq98jkq84bkbznk0c1qp838fhk7a8vt90nm04ifopa61fjwzhoh33xdlwbw3xl2si1j245bn83f7ovjn8nraf9z77zdscgs7zd82jqdk2hmb1hnds85jfxe2heny74umzakmvcusi6mjoxobvqzu4rkwpe7b0jkvf0gx49r4pb27o976q3qxpc41mly0ynoqscwbjisxw8xnnsllj0swzzt179vehyf1z3miusbuhudb1zsqce0482ajgmy2fdjlqd2oyi903kn == \2\m\r\q\9\t\0\0\f\b\u\6\l\8\t\s\6\e\r\w\r\k\v\k\n\v\8\7\5\s\f\c\9\6\l\z\f\z\7\2\u\p\n\e\0\j\v\l\g\4\0\d\u\1\6\4\p\p\2\c\f\d\c\q\b\6\q\8\k\b\2\f\y\c\c\w\9\8\7\t\q\q\x\7\n\7\0\e\7\b\t\b\p\y\i\9\f\2\q\m\c\z\d\l\9\v\8\4\x\o\w\g\t\i\j\6\4\w\j\z\6\a\a\9\h\i\9\3\t\p\p\4\n\1\e\i\b\9\r\2\p\w\7\7\y\6\d\x\n\9\g\g\n\f\f\w\o\e\f\v\0\1\x\d\b\p\r\g\j\j\c\9\c\p\d\r\8\2\q\s\0\c\2\5\j\b\n\s\e\h\x\u\u\d\z\i\r\s\u\n\q\n\f\d\7\o\j\3\d\v\y\v\j\2\o\e\v\q\a\s\v\f\f\e\i\3\f\h\0\8\9\g\a\0\v\4\2\w\k\w\9\n\h\r\i\e\q\9\8\j\k\q\8\4\b\k\b\z\n\k\0\c\1\q\p\8\3\8\f\h\k\7\a\8\v\t\9\0\n\m\0\4\i\f\o\p\a\6\1\f\j\w\z\h\o\h\3\3\x\d\l\w\b\w\3\x\l\2\s\i\1\j\2\4\5\b\n\8\3\f\7\o\v\j\n\8\n\r\a\f\9\z\7\7\z\d\s\c\g\s\7\z\d\8\2\j\q\d\k\2\h\m\b\1\h\n\d\s\8\5\j\f\x\e\2\h\e\n\y\7\4\u\m\z\a\k\m\v\c\u\s\i\6\m\j\o\x\o\b\v\q\z\u\4\r\k\w\p\e\7\b\0\j\k\v\f\0\g\x\4\9\r\4\p\b\2\7\o\9\7\6\q\3\q\x\p\c\4\1\m\l\y\0\y\n\o\q\s\c\w\b\j\i\s\x\w\8\x\n\n\s\l\l\j\0\s\w\z\z\t\1\7\9\v\e\h\y\f\1\z\3\m\i\u\s\b\u\h\u\d\b\1\z\s\q\c\e\0\4\8\2\a\j\g\m\y\2\f\d\j\l\q\d\2\o\y\i\9\0\3\k\n ]] 00:06:24.085 09:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.085 09:12:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:24.344 [2024-10-08 09:12:15.779351] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:24.344 [2024-10-08 09:12:15.779467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60634 ] 00:06:24.344 [2024-10-08 09:12:15.918935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.603 [2024-10-08 09:12:16.052628] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.603 [2024-10-08 09:12:16.111168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.603  [2024-10-08T09:12:16.544Z] Copying: 512/512 [B] (average 250 kBps) 00:06:24.861 00:06:24.861 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2mrq9t00fbu6l8ts6erwrkvknv875sfc96lzfz72upne0jvlg40du164pp2cfdcqb6q8kb2fyccw987tqqx7n70e7btbpyi9f2qmczdl9v84xowgtij64wjz6aa9hi93tpp4n1eib9r2pw77y6dxn9ggnffwoefv01xdbprgjjc9cpdr82qs0c25jbnsehxuudzirsunqnfd7oj3dvyvj2oevqasvffei3fh089ga0v42wkw9nhrieq98jkq84bkbznk0c1qp838fhk7a8vt90nm04ifopa61fjwzhoh33xdlwbw3xl2si1j245bn83f7ovjn8nraf9z77zdscgs7zd82jqdk2hmb1hnds85jfxe2heny74umzakmvcusi6mjoxobvqzu4rkwpe7b0jkvf0gx49r4pb27o976q3qxpc41mly0ynoqscwbjisxw8xnnsllj0swzzt179vehyf1z3miusbuhudb1zsqce0482ajgmy2fdjlqd2oyi903kn == \2\m\r\q\9\t\0\0\f\b\u\6\l\8\t\s\6\e\r\w\r\k\v\k\n\v\8\7\5\s\f\c\9\6\l\z\f\z\7\2\u\p\n\e\0\j\v\l\g\4\0\d\u\1\6\4\p\p\2\c\f\d\c\q\b\6\q\8\k\b\2\f\y\c\c\w\9\8\7\t\q\q\x\7\n\7\0\e\7\b\t\b\p\y\i\9\f\2\q\m\c\z\d\l\9\v\8\4\x\o\w\g\t\i\j\6\4\w\j\z\6\a\a\9\h\i\9\3\t\p\p\4\n\1\e\i\b\9\r\2\p\w\7\7\y\6\d\x\n\9\g\g\n\f\f\w\o\e\f\v\0\1\x\d\b\p\r\g\j\j\c\9\c\p\d\r\8\2\q\s\0\c\2\5\j\b\n\s\e\h\x\u\u\d\z\i\r\s\u\n\q\n\f\d\7\o\j\3\d\v\y\v\j\2\o\e\v\q\a\s\v\f\f\e\i\3\f\h\0\8\9\g\a\0\v\4\2\w\k\w\9\n\h\r\i\e\q\9\8\j\k\q\8\4\b\k\b\z\n\k\0\c\1\q\p\8\3\8\f\h\k\7\a\8\v\t\9\0\n\m\0\4\i\f\o\p\a\6\1\f\j\w\z\h\o\h\3\3\x\d\l\w\b\w\3\x\l\2\s\i\1\j\2\4\5\b\n\8\3\f\7\o\v\j\n\8\n\r\a\f\9\z\7\7\z\d\s\c\g\s\7\z\d\8\2\j\q\d\k\2\h\m\b\1\h\n\d\s\8\5\j\f\x\e\2\h\e\n\y\7\4\u\m\z\a\k\m\v\c\u\s\i\6\m\j\o\x\o\b\v\q\z\u\4\r\k\w\p\e\7\b\0\j\k\v\f\0\g\x\4\9\r\4\p\b\2\7\o\9\7\6\q\3\q\x\p\c\4\1\m\l\y\0\y\n\o\q\s\c\w\b\j\i\s\x\w\8\x\n\n\s\l\l\j\0\s\w\z\z\t\1\7\9\v\e\h\y\f\1\z\3\m\i\u\s\b\u\h\u\d\b\1\z\s\q\c\e\0\4\8\2\a\j\g\m\y\2\f\d\j\l\q\d\2\o\y\i\9\0\3\k\n ]] 00:06:24.861 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:24.861 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:24.861 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:24.861 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:24.861 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.861 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:24.861 [2024-10-08 09:12:16.439692] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:24.861 [2024-10-08 09:12:16.439824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60649 ] 00:06:25.119 [2024-10-08 09:12:16.577002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.119 [2024-10-08 09:12:16.686535] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.119 [2024-10-08 09:12:16.740332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.119  [2024-10-08T09:12:17.060Z] Copying: 512/512 [B] (average 500 kBps) 00:06:25.377 00:06:25.378 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ p4hmk60hszgcmjfg7xyftspc309jf18mkrwhb6o7rgrc3kpfmhvy09wo35wywn4owek05421s34odq9791wdde04qean68b0yy1pmp63ja7ay0ll29drybvprpa23rsv2613pjd8fmlpo9gkti77af2qvmfm57x0j3svpzdykmealoqgt2h3v82iwxn9ofgkdlrnwi4dchshts2eoc8sbopqlomb59q1dq75k54o4qai60pfewhrhc422n0czns8rjlrbbrrj1f81kd5e3ywgwp2m4pamq14j70wn8xmpzzru4zg0xizt0v1cnr4jcc3qv3oiyzdj91go4ymm8ff2btjak62p2o4sjycmuouimira5i7tms269f2syu8639mws9ojo9soyb6ldso4t86jbn2c93luomao23fffbf4dzx2oa1d8w5kd1u7704jbs0c5ts5q4aff4zzpc5h1x0osxc20in6zno1lmpi0z49i5efj8btpdinnnc63a16135 == \p\4\h\m\k\6\0\h\s\z\g\c\m\j\f\g\7\x\y\f\t\s\p\c\3\0\9\j\f\1\8\m\k\r\w\h\b\6\o\7\r\g\r\c\3\k\p\f\m\h\v\y\0\9\w\o\3\5\w\y\w\n\4\o\w\e\k\0\5\4\2\1\s\3\4\o\d\q\9\7\9\1\w\d\d\e\0\4\q\e\a\n\6\8\b\0\y\y\1\p\m\p\6\3\j\a\7\a\y\0\l\l\2\9\d\r\y\b\v\p\r\p\a\2\3\r\s\v\2\6\1\3\p\j\d\8\f\m\l\p\o\9\g\k\t\i\7\7\a\f\2\q\v\m\f\m\5\7\x\0\j\3\s\v\p\z\d\y\k\m\e\a\l\o\q\g\t\2\h\3\v\8\2\i\w\x\n\9\o\f\g\k\d\l\r\n\w\i\4\d\c\h\s\h\t\s\2\e\o\c\8\s\b\o\p\q\l\o\m\b\5\9\q\1\d\q\7\5\k\5\4\o\4\q\a\i\6\0\p\f\e\w\h\r\h\c\4\2\2\n\0\c\z\n\s\8\r\j\l\r\b\b\r\r\j\1\f\8\1\k\d\5\e\3\y\w\g\w\p\2\m\4\p\a\m\q\1\4\j\7\0\w\n\8\x\m\p\z\z\r\u\4\z\g\0\x\i\z\t\0\v\1\c\n\r\4\j\c\c\3\q\v\3\o\i\y\z\d\j\9\1\g\o\4\y\m\m\8\f\f\2\b\t\j\a\k\6\2\p\2\o\4\s\j\y\c\m\u\o\u\i\m\i\r\a\5\i\7\t\m\s\2\6\9\f\2\s\y\u\8\6\3\9\m\w\s\9\o\j\o\9\s\o\y\b\6\l\d\s\o\4\t\8\6\j\b\n\2\c\9\3\l\u\o\m\a\o\2\3\f\f\f\b\f\4\d\z\x\2\o\a\1\d\8\w\5\k\d\1\u\7\7\0\4\j\b\s\0\c\5\t\s\5\q\4\a\f\f\4\z\z\p\c\5\h\1\x\0\o\s\x\c\2\0\i\n\6\z\n\o\1\l\m\p\i\0\z\4\9\i\5\e\f\j\8\b\t\p\d\i\n\n\n\c\6\3\a\1\6\1\3\5 ]] 00:06:25.378 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.378 09:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:25.378 [2024-10-08 09:12:17.027612] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:25.378 [2024-10-08 09:12:17.027701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60658 ] 00:06:25.636 [2024-10-08 09:12:17.156625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.636 [2024-10-08 09:12:17.257391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.636 [2024-10-08 09:12:17.314386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.895  [2024-10-08T09:12:17.578Z] Copying: 512/512 [B] (average 500 kBps) 00:06:25.895 00:06:25.895 09:12:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ p4hmk60hszgcmjfg7xyftspc309jf18mkrwhb6o7rgrc3kpfmhvy09wo35wywn4owek05421s34odq9791wdde04qean68b0yy1pmp63ja7ay0ll29drybvprpa23rsv2613pjd8fmlpo9gkti77af2qvmfm57x0j3svpzdykmealoqgt2h3v82iwxn9ofgkdlrnwi4dchshts2eoc8sbopqlomb59q1dq75k54o4qai60pfewhrhc422n0czns8rjlrbbrrj1f81kd5e3ywgwp2m4pamq14j70wn8xmpzzru4zg0xizt0v1cnr4jcc3qv3oiyzdj91go4ymm8ff2btjak62p2o4sjycmuouimira5i7tms269f2syu8639mws9ojo9soyb6ldso4t86jbn2c93luomao23fffbf4dzx2oa1d8w5kd1u7704jbs0c5ts5q4aff4zzpc5h1x0osxc20in6zno1lmpi0z49i5efj8btpdinnnc63a16135 == \p\4\h\m\k\6\0\h\s\z\g\c\m\j\f\g\7\x\y\f\t\s\p\c\3\0\9\j\f\1\8\m\k\r\w\h\b\6\o\7\r\g\r\c\3\k\p\f\m\h\v\y\0\9\w\o\3\5\w\y\w\n\4\o\w\e\k\0\5\4\2\1\s\3\4\o\d\q\9\7\9\1\w\d\d\e\0\4\q\e\a\n\6\8\b\0\y\y\1\p\m\p\6\3\j\a\7\a\y\0\l\l\2\9\d\r\y\b\v\p\r\p\a\2\3\r\s\v\2\6\1\3\p\j\d\8\f\m\l\p\o\9\g\k\t\i\7\7\a\f\2\q\v\m\f\m\5\7\x\0\j\3\s\v\p\z\d\y\k\m\e\a\l\o\q\g\t\2\h\3\v\8\2\i\w\x\n\9\o\f\g\k\d\l\r\n\w\i\4\d\c\h\s\h\t\s\2\e\o\c\8\s\b\o\p\q\l\o\m\b\5\9\q\1\d\q\7\5\k\5\4\o\4\q\a\i\6\0\p\f\e\w\h\r\h\c\4\2\2\n\0\c\z\n\s\8\r\j\l\r\b\b\r\r\j\1\f\8\1\k\d\5\e\3\y\w\g\w\p\2\m\4\p\a\m\q\1\4\j\7\0\w\n\8\x\m\p\z\z\r\u\4\z\g\0\x\i\z\t\0\v\1\c\n\r\4\j\c\c\3\q\v\3\o\i\y\z\d\j\9\1\g\o\4\y\m\m\8\f\f\2\b\t\j\a\k\6\2\p\2\o\4\s\j\y\c\m\u\o\u\i\m\i\r\a\5\i\7\t\m\s\2\6\9\f\2\s\y\u\8\6\3\9\m\w\s\9\o\j\o\9\s\o\y\b\6\l\d\s\o\4\t\8\6\j\b\n\2\c\9\3\l\u\o\m\a\o\2\3\f\f\f\b\f\4\d\z\x\2\o\a\1\d\8\w\5\k\d\1\u\7\7\0\4\j\b\s\0\c\5\t\s\5\q\4\a\f\f\4\z\z\p\c\5\h\1\x\0\o\s\x\c\2\0\i\n\6\z\n\o\1\l\m\p\i\0\z\4\9\i\5\e\f\j\8\b\t\p\d\i\n\n\n\c\6\3\a\1\6\1\3\5 ]] 00:06:25.895 09:12:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.155 09:12:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:26.155 [2024-10-08 09:12:17.633649] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:26.155 [2024-10-08 09:12:17.633839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60668 ] 00:06:26.155 [2024-10-08 09:12:17.774796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.427 [2024-10-08 09:12:17.875415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.427 [2024-10-08 09:12:17.927539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.427  [2024-10-08T09:12:18.368Z] Copying: 512/512 [B] (average 250 kBps) 00:06:26.685 00:06:26.685 09:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ p4hmk60hszgcmjfg7xyftspc309jf18mkrwhb6o7rgrc3kpfmhvy09wo35wywn4owek05421s34odq9791wdde04qean68b0yy1pmp63ja7ay0ll29drybvprpa23rsv2613pjd8fmlpo9gkti77af2qvmfm57x0j3svpzdykmealoqgt2h3v82iwxn9ofgkdlrnwi4dchshts2eoc8sbopqlomb59q1dq75k54o4qai60pfewhrhc422n0czns8rjlrbbrrj1f81kd5e3ywgwp2m4pamq14j70wn8xmpzzru4zg0xizt0v1cnr4jcc3qv3oiyzdj91go4ymm8ff2btjak62p2o4sjycmuouimira5i7tms269f2syu8639mws9ojo9soyb6ldso4t86jbn2c93luomao23fffbf4dzx2oa1d8w5kd1u7704jbs0c5ts5q4aff4zzpc5h1x0osxc20in6zno1lmpi0z49i5efj8btpdinnnc63a16135 == \p\4\h\m\k\6\0\h\s\z\g\c\m\j\f\g\7\x\y\f\t\s\p\c\3\0\9\j\f\1\8\m\k\r\w\h\b\6\o\7\r\g\r\c\3\k\p\f\m\h\v\y\0\9\w\o\3\5\w\y\w\n\4\o\w\e\k\0\5\4\2\1\s\3\4\o\d\q\9\7\9\1\w\d\d\e\0\4\q\e\a\n\6\8\b\0\y\y\1\p\m\p\6\3\j\a\7\a\y\0\l\l\2\9\d\r\y\b\v\p\r\p\a\2\3\r\s\v\2\6\1\3\p\j\d\8\f\m\l\p\o\9\g\k\t\i\7\7\a\f\2\q\v\m\f\m\5\7\x\0\j\3\s\v\p\z\d\y\k\m\e\a\l\o\q\g\t\2\h\3\v\8\2\i\w\x\n\9\o\f\g\k\d\l\r\n\w\i\4\d\c\h\s\h\t\s\2\e\o\c\8\s\b\o\p\q\l\o\m\b\5\9\q\1\d\q\7\5\k\5\4\o\4\q\a\i\6\0\p\f\e\w\h\r\h\c\4\2\2\n\0\c\z\n\s\8\r\j\l\r\b\b\r\r\j\1\f\8\1\k\d\5\e\3\y\w\g\w\p\2\m\4\p\a\m\q\1\4\j\7\0\w\n\8\x\m\p\z\z\r\u\4\z\g\0\x\i\z\t\0\v\1\c\n\r\4\j\c\c\3\q\v\3\o\i\y\z\d\j\9\1\g\o\4\y\m\m\8\f\f\2\b\t\j\a\k\6\2\p\2\o\4\s\j\y\c\m\u\o\u\i\m\i\r\a\5\i\7\t\m\s\2\6\9\f\2\s\y\u\8\6\3\9\m\w\s\9\o\j\o\9\s\o\y\b\6\l\d\s\o\4\t\8\6\j\b\n\2\c\9\3\l\u\o\m\a\o\2\3\f\f\f\b\f\4\d\z\x\2\o\a\1\d\8\w\5\k\d\1\u\7\7\0\4\j\b\s\0\c\5\t\s\5\q\4\a\f\f\4\z\z\p\c\5\h\1\x\0\o\s\x\c\2\0\i\n\6\z\n\o\1\l\m\p\i\0\z\4\9\i\5\e\f\j\8\b\t\p\d\i\n\n\n\c\6\3\a\1\6\1\3\5 ]] 00:06:26.685 09:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.685 09:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:26.685 [2024-10-08 09:12:18.237316] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:26.685 [2024-10-08 09:12:18.237460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60677 ] 00:06:26.943 [2024-10-08 09:12:18.375091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.943 [2024-10-08 09:12:18.473380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.943 [2024-10-08 09:12:18.526183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.943  [2024-10-08T09:12:18.883Z] Copying: 512/512 [B] (average 500 kBps) 00:06:27.200 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ p4hmk60hszgcmjfg7xyftspc309jf18mkrwhb6o7rgrc3kpfmhvy09wo35wywn4owek05421s34odq9791wdde04qean68b0yy1pmp63ja7ay0ll29drybvprpa23rsv2613pjd8fmlpo9gkti77af2qvmfm57x0j3svpzdykmealoqgt2h3v82iwxn9ofgkdlrnwi4dchshts2eoc8sbopqlomb59q1dq75k54o4qai60pfewhrhc422n0czns8rjlrbbrrj1f81kd5e3ywgwp2m4pamq14j70wn8xmpzzru4zg0xizt0v1cnr4jcc3qv3oiyzdj91go4ymm8ff2btjak62p2o4sjycmuouimira5i7tms269f2syu8639mws9ojo9soyb6ldso4t86jbn2c93luomao23fffbf4dzx2oa1d8w5kd1u7704jbs0c5ts5q4aff4zzpc5h1x0osxc20in6zno1lmpi0z49i5efj8btpdinnnc63a16135 == \p\4\h\m\k\6\0\h\s\z\g\c\m\j\f\g\7\x\y\f\t\s\p\c\3\0\9\j\f\1\8\m\k\r\w\h\b\6\o\7\r\g\r\c\3\k\p\f\m\h\v\y\0\9\w\o\3\5\w\y\w\n\4\o\w\e\k\0\5\4\2\1\s\3\4\o\d\q\9\7\9\1\w\d\d\e\0\4\q\e\a\n\6\8\b\0\y\y\1\p\m\p\6\3\j\a\7\a\y\0\l\l\2\9\d\r\y\b\v\p\r\p\a\2\3\r\s\v\2\6\1\3\p\j\d\8\f\m\l\p\o\9\g\k\t\i\7\7\a\f\2\q\v\m\f\m\5\7\x\0\j\3\s\v\p\z\d\y\k\m\e\a\l\o\q\g\t\2\h\3\v\8\2\i\w\x\n\9\o\f\g\k\d\l\r\n\w\i\4\d\c\h\s\h\t\s\2\e\o\c\8\s\b\o\p\q\l\o\m\b\5\9\q\1\d\q\7\5\k\5\4\o\4\q\a\i\6\0\p\f\e\w\h\r\h\c\4\2\2\n\0\c\z\n\s\8\r\j\l\r\b\b\r\r\j\1\f\8\1\k\d\5\e\3\y\w\g\w\p\2\m\4\p\a\m\q\1\4\j\7\0\w\n\8\x\m\p\z\z\r\u\4\z\g\0\x\i\z\t\0\v\1\c\n\r\4\j\c\c\3\q\v\3\o\i\y\z\d\j\9\1\g\o\4\y\m\m\8\f\f\2\b\t\j\a\k\6\2\p\2\o\4\s\j\y\c\m\u\o\u\i\m\i\r\a\5\i\7\t\m\s\2\6\9\f\2\s\y\u\8\6\3\9\m\w\s\9\o\j\o\9\s\o\y\b\6\l\d\s\o\4\t\8\6\j\b\n\2\c\9\3\l\u\o\m\a\o\2\3\f\f\f\b\f\4\d\z\x\2\o\a\1\d\8\w\5\k\d\1\u\7\7\0\4\j\b\s\0\c\5\t\s\5\q\4\a\f\f\4\z\z\p\c\5\h\1\x\0\o\s\x\c\2\0\i\n\6\z\n\o\1\l\m\p\i\0\z\4\9\i\5\e\f\j\8\b\t\p\d\i\n\n\n\c\6\3\a\1\6\1\3\5 ]] 00:06:27.200 00:06:27.200 real 0m4.909s 00:06:27.200 user 0m2.805s 00:06:27.200 sys 0m2.306s 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.200 ************************************ 00:06:27.200 END TEST dd_flags_misc 00:06:27.200 ************************************ 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:27.200 * Second test run, disabling liburing, forcing AIO 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:27.200 ************************************ 00:06:27.200 START TEST dd_flag_append_forced_aio 00:06:27.200 ************************************ 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=r7m1zopd08js6pqjbijgneh3trjttxsf 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=ict6mu3az33cer3wdq4ebgr6609agqzn 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s r7m1zopd08js6pqjbijgneh3trjttxsf 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s ict6mu3az33cer3wdq4ebgr6609agqzn 00:06:27.200 09:12:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:27.200 [2024-10-08 09:12:18.875448] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:27.200 [2024-10-08 09:12:18.875564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60706 ] 00:06:27.458 [2024-10-08 09:12:19.014167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.458 [2024-10-08 09:12:19.122283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.716 [2024-10-08 09:12:19.178980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.716  [2024-10-08T09:12:19.657Z] Copying: 32/32 [B] (average 31 kBps) 00:06:27.974 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ ict6mu3az33cer3wdq4ebgr6609agqznr7m1zopd08js6pqjbijgneh3trjttxsf == \i\c\t\6\m\u\3\a\z\3\3\c\e\r\3\w\d\q\4\e\b\g\r\6\6\0\9\a\g\q\z\n\r\7\m\1\z\o\p\d\0\8\j\s\6\p\q\j\b\i\j\g\n\e\h\3\t\r\j\t\t\x\s\f ]] 00:06:27.974 00:06:27.974 real 0m0.654s 00:06:27.974 user 0m0.367s 00:06:27.974 sys 0m0.159s 00:06:27.974 ************************************ 00:06:27.974 END TEST dd_flag_append_forced_aio 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:27.974 ************************************ 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.974 ************************************ 00:06:27.974 START TEST dd_flag_directory_forced_aio 00:06:27.974 ************************************ 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.974 09:12:19 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.974 09:12:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.974 [2024-10-08 09:12:19.569563] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:27.974 [2024-10-08 09:12:19.569673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60738 ] 00:06:28.234 [2024-10-08 09:12:19.705303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.234 [2024-10-08 09:12:19.783258] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.234 [2024-10-08 09:12:19.836288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.234 [2024-10-08 09:12:19.871140] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:28.234 [2024-10-08 09:12:19.871227] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:28.234 [2024-10-08 09:12:19.871240] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.492 [2024-10-08 09:12:19.987818] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:28.492 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:28.492 [2024-10-08 09:12:20.160034] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:28.492 [2024-10-08 09:12:20.160104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60747 ] 00:06:28.751 [2024-10-08 09:12:20.290435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.751 [2024-10-08 09:12:20.410248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.010 [2024-10-08 09:12:20.471072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.010 [2024-10-08 09:12:20.510969] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:29.010 [2024-10-08 09:12:20.511036] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:29.010 [2024-10-08 09:12:20.511060] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.010 [2024-10-08 09:12:20.637726] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:29.269 09:12:20 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.269 00:06:29.269 real 0m1.230s 00:06:29.269 user 0m0.714s 00:06:29.269 sys 0m0.305s 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:29.269 ************************************ 00:06:29.269 END TEST dd_flag_directory_forced_aio 00:06:29.269 ************************************ 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:29.269 ************************************ 00:06:29.269 START TEST dd_flag_nofollow_forced_aio 00:06:29.269 ************************************ 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.269 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:29.270 09:12:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.270 [2024-10-08 09:12:20.895372] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:29.270 [2024-10-08 09:12:20.895536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60776 ] 00:06:29.528 [2024-10-08 09:12:21.046787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.529 [2024-10-08 09:12:21.162385] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.787 [2024-10-08 09:12:21.216732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.787 [2024-10-08 09:12:21.253542] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:29.787 [2024-10-08 09:12:21.253611] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:29.787 [2024-10-08 09:12:21.253626] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.787 [2024-10-08 09:12:21.372875] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.046 09:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:30.046 [2024-10-08 09:12:21.560189] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:30.046 [2024-10-08 09:12:21.560292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60791 ] 00:06:30.046 [2024-10-08 09:12:21.694630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.305 [2024-10-08 09:12:21.814861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.305 [2024-10-08 09:12:21.871997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.305 [2024-10-08 09:12:21.908766] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:30.305 [2024-10-08 09:12:21.908821] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:30.305 [2024-10-08 09:12:21.908837] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.565 [2024-10-08 09:12:22.029483] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:30.565 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.565 [2024-10-08 09:12:22.199448] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:30.565 [2024-10-08 09:12:22.199548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60793 ] 00:06:30.825 [2024-10-08 09:12:22.338095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.825 [2024-10-08 09:12:22.435907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.825 [2024-10-08 09:12:22.491025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.083  [2024-10-08T09:12:23.025Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.342 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ dt5av8m8blokitrfebwlzhpvbv5t9cgsr7ylflgw2c3gpke943ny4l7sg3ipqrz7lk38l7chk61vo00xdg5h228zr9gvz3pqmu3agw6olpih1k5yjqdysvlgtsaqyj1fpbu87t3cxfp1n2xcgday9nggdnsh50rruvq6ut72wthms86hjq0mbpytr6rkto47nbjivjoql4u79b3veo8vi8tipp7cksubdx1edtsej8kuf15kwwdba05vgzam4pakcha3wv9v5q9xl06gib1spbyv8x2llrv5zukrhmigzaronscs5rypz1w40e8yi942nucf488s41figp62pgxktwuu65zo6zev0hfrdaxbi2p5t5oxc7vvq9fbpygfwgwgxu7onwcg4flxuy5781ajzd6snrhj7ix40jx25jj21lj888vbovv700k3jzznz5drgl519r4szoorxhtvsl83he7qvck883ajlm23yr5qoottg46v9l6n26exvbo63a69 == \d\t\5\a\v\8\m\8\b\l\o\k\i\t\r\f\e\b\w\l\z\h\p\v\b\v\5\t\9\c\g\s\r\7\y\l\f\l\g\w\2\c\3\g\p\k\e\9\4\3\n\y\4\l\7\s\g\3\i\p\q\r\z\7\l\k\3\8\l\7\c\h\k\6\1\v\o\0\0\x\d\g\5\h\2\2\8\z\r\9\g\v\z\3\p\q\m\u\3\a\g\w\6\o\l\p\i\h\1\k\5\y\j\q\d\y\s\v\l\g\t\s\a\q\y\j\1\f\p\b\u\8\7\t\3\c\x\f\p\1\n\2\x\c\g\d\a\y\9\n\g\g\d\n\s\h\5\0\r\r\u\v\q\6\u\t\7\2\w\t\h\m\s\8\6\h\j\q\0\m\b\p\y\t\r\6\r\k\t\o\4\7\n\b\j\i\v\j\o\q\l\4\u\7\9\b\3\v\e\o\8\v\i\8\t\i\p\p\7\c\k\s\u\b\d\x\1\e\d\t\s\e\j\8\k\u\f\1\5\k\w\w\d\b\a\0\5\v\g\z\a\m\4\p\a\k\c\h\a\3\w\v\9\v\5\q\9\x\l\0\6\g\i\b\1\s\p\b\y\v\8\x\2\l\l\r\v\5\z\u\k\r\h\m\i\g\z\a\r\o\n\s\c\s\5\r\y\p\z\1\w\4\0\e\8\y\i\9\4\2\n\u\c\f\4\8\8\s\4\1\f\i\g\p\6\2\p\g\x\k\t\w\u\u\6\5\z\o\6\z\e\v\0\h\f\r\d\a\x\b\i\2\p\5\t\5\o\x\c\7\v\v\q\9\f\b\p\y\g\f\w\g\w\g\x\u\7\o\n\w\c\g\4\f\l\x\u\y\5\7\8\1\a\j\z\d\6\s\n\r\h\j\7\i\x\4\0\j\x\2\5\j\j\2\1\l\j\8\8\8\v\b\o\v\v\7\0\0\k\3\j\z\z\n\z\5\d\r\g\l\5\1\9\r\4\s\z\o\o\r\x\h\t\v\s\l\8\3\h\e\7\q\v\c\k\8\8\3\a\j\l\m\2\3\y\r\5\q\o\o\t\t\g\4\6\v\9\l\6\n\2\6\e\x\v\b\o\6\3\a\6\9 ]] 00:06:31.343 00:06:31.343 real 0m1.989s 00:06:31.343 user 0m1.165s 00:06:31.343 sys 0m0.490s 00:06:31.343 ************************************ 00:06:31.343 END TEST dd_flag_nofollow_forced_aio 00:06:31.343 ************************************ 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:31.343 ************************************ 00:06:31.343 START TEST dd_flag_noatime_forced_aio 00:06:31.343 ************************************ 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1728378742 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1728378742 00:06:31.343 09:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:32.282 09:12:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.282 [2024-10-08 09:12:23.925486] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:32.282 [2024-10-08 09:12:23.925635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60839 ] 00:06:32.540 [2024-10-08 09:12:24.063573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.540 [2024-10-08 09:12:24.176170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.799 [2024-10-08 09:12:24.230648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.799  [2024-10-08T09:12:24.740Z] Copying: 512/512 [B] (average 500 kBps) 00:06:33.057 00:06:33.057 09:12:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.057 09:12:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1728378742 )) 00:06:33.057 09:12:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.057 09:12:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1728378742 )) 00:06:33.057 09:12:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.057 [2024-10-08 09:12:24.635181] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:33.057 [2024-10-08 09:12:24.635328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60859 ] 00:06:33.316 [2024-10-08 09:12:24.776158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.316 [2024-10-08 09:12:24.899138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.316 [2024-10-08 09:12:24.955021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.316  [2024-10-08T09:12:25.567Z] Copying: 512/512 [B] (average 500 kBps) 00:06:33.884 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1728378744 )) 00:06:33.884 00:06:33.884 real 0m2.439s 00:06:33.884 user 0m0.856s 00:06:33.884 sys 0m0.338s 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.884 ************************************ 00:06:33.884 END TEST dd_flag_noatime_forced_aio 00:06:33.884 ************************************ 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.884 ************************************ 00:06:33.884 START TEST dd_flags_misc_forced_aio 00:06:33.884 ************************************ 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:33.884 09:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:33.884 [2024-10-08 09:12:25.403402] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:33.884 [2024-10-08 09:12:25.403521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60884 ] 00:06:33.884 [2024-10-08 09:12:25.543448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.142 [2024-10-08 09:12:25.668908] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.142 [2024-10-08 09:12:25.726362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.142  [2024-10-08T09:12:26.084Z] Copying: 512/512 [B] (average 500 kBps) 00:06:34.401 00:06:34.401 09:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h7j7f41scjz06g4d4bxtkc1t59rivivsy2s696ubni65y2iqdcl1ikeljul4kc7ncmci7d00um6ohr0r4zir371arwbsveeru0mxket0uad88y3yskjjha2vyzcskd59y6hpa9z0k1yesucf0zstoa10bmmw45o2wuovcze0lusqqhwwvc5jvypxdj8n1natk8awf8p594rpgheffb0388yff68clb8qi6hpsti5jkf37n7iwry1tcuog9ww8jyrkxe1furdo948y2qamzli60lfjpr3ee0lm1agmoggp0v81q8fo4agfcxnygdw9u72et56c1ccdkgtsiaswprho46lpuovnaiiku9ukp15vr0muegse44335s40tvewyv3ixxouuxg577igz2hjvws2fnlw07f5q4ui6kqioaoc0qxl29j2frdy24str0k07j0v4glw73cyx8nyzsgc1np25qjex9q1qnnhows690cbejwbp36w6v0qes0foip5pip == 
\h\7\j\7\f\4\1\s\c\j\z\0\6\g\4\d\4\b\x\t\k\c\1\t\5\9\r\i\v\i\v\s\y\2\s\6\9\6\u\b\n\i\6\5\y\2\i\q\d\c\l\1\i\k\e\l\j\u\l\4\k\c\7\n\c\m\c\i\7\d\0\0\u\m\6\o\h\r\0\r\4\z\i\r\3\7\1\a\r\w\b\s\v\e\e\r\u\0\m\x\k\e\t\0\u\a\d\8\8\y\3\y\s\k\j\j\h\a\2\v\y\z\c\s\k\d\5\9\y\6\h\p\a\9\z\0\k\1\y\e\s\u\c\f\0\z\s\t\o\a\1\0\b\m\m\w\4\5\o\2\w\u\o\v\c\z\e\0\l\u\s\q\q\h\w\w\v\c\5\j\v\y\p\x\d\j\8\n\1\n\a\t\k\8\a\w\f\8\p\5\9\4\r\p\g\h\e\f\f\b\0\3\8\8\y\f\f\6\8\c\l\b\8\q\i\6\h\p\s\t\i\5\j\k\f\3\7\n\7\i\w\r\y\1\t\c\u\o\g\9\w\w\8\j\y\r\k\x\e\1\f\u\r\d\o\9\4\8\y\2\q\a\m\z\l\i\6\0\l\f\j\p\r\3\e\e\0\l\m\1\a\g\m\o\g\g\p\0\v\8\1\q\8\f\o\4\a\g\f\c\x\n\y\g\d\w\9\u\7\2\e\t\5\6\c\1\c\c\d\k\g\t\s\i\a\s\w\p\r\h\o\4\6\l\p\u\o\v\n\a\i\i\k\u\9\u\k\p\1\5\v\r\0\m\u\e\g\s\e\4\4\3\3\5\s\4\0\t\v\e\w\y\v\3\i\x\x\o\u\u\x\g\5\7\7\i\g\z\2\h\j\v\w\s\2\f\n\l\w\0\7\f\5\q\4\u\i\6\k\q\i\o\a\o\c\0\q\x\l\2\9\j\2\f\r\d\y\2\4\s\t\r\0\k\0\7\j\0\v\4\g\l\w\7\3\c\y\x\8\n\y\z\s\g\c\1\n\p\2\5\q\j\e\x\9\q\1\q\n\n\h\o\w\s\6\9\0\c\b\e\j\w\b\p\3\6\w\6\v\0\q\e\s\0\f\o\i\p\5\p\i\p ]] 00:06:34.401 09:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.401 09:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:34.401 [2024-10-08 09:12:26.074462] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:34.402 [2024-10-08 09:12:26.074585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60893 ] 00:06:34.661 [2024-10-08 09:12:26.212416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.661 [2024-10-08 09:12:26.313614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.922 [2024-10-08 09:12:26.369395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.922  [2024-10-08T09:12:26.864Z] Copying: 512/512 [B] (average 500 kBps) 00:06:35.181 00:06:35.181 09:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h7j7f41scjz06g4d4bxtkc1t59rivivsy2s696ubni65y2iqdcl1ikeljul4kc7ncmci7d00um6ohr0r4zir371arwbsveeru0mxket0uad88y3yskjjha2vyzcskd59y6hpa9z0k1yesucf0zstoa10bmmw45o2wuovcze0lusqqhwwvc5jvypxdj8n1natk8awf8p594rpgheffb0388yff68clb8qi6hpsti5jkf37n7iwry1tcuog9ww8jyrkxe1furdo948y2qamzli60lfjpr3ee0lm1agmoggp0v81q8fo4agfcxnygdw9u72et56c1ccdkgtsiaswprho46lpuovnaiiku9ukp15vr0muegse44335s40tvewyv3ixxouuxg577igz2hjvws2fnlw07f5q4ui6kqioaoc0qxl29j2frdy24str0k07j0v4glw73cyx8nyzsgc1np25qjex9q1qnnhows690cbejwbp36w6v0qes0foip5pip == 
\h\7\j\7\f\4\1\s\c\j\z\0\6\g\4\d\4\b\x\t\k\c\1\t\5\9\r\i\v\i\v\s\y\2\s\6\9\6\u\b\n\i\6\5\y\2\i\q\d\c\l\1\i\k\e\l\j\u\l\4\k\c\7\n\c\m\c\i\7\d\0\0\u\m\6\o\h\r\0\r\4\z\i\r\3\7\1\a\r\w\b\s\v\e\e\r\u\0\m\x\k\e\t\0\u\a\d\8\8\y\3\y\s\k\j\j\h\a\2\v\y\z\c\s\k\d\5\9\y\6\h\p\a\9\z\0\k\1\y\e\s\u\c\f\0\z\s\t\o\a\1\0\b\m\m\w\4\5\o\2\w\u\o\v\c\z\e\0\l\u\s\q\q\h\w\w\v\c\5\j\v\y\p\x\d\j\8\n\1\n\a\t\k\8\a\w\f\8\p\5\9\4\r\p\g\h\e\f\f\b\0\3\8\8\y\f\f\6\8\c\l\b\8\q\i\6\h\p\s\t\i\5\j\k\f\3\7\n\7\i\w\r\y\1\t\c\u\o\g\9\w\w\8\j\y\r\k\x\e\1\f\u\r\d\o\9\4\8\y\2\q\a\m\z\l\i\6\0\l\f\j\p\r\3\e\e\0\l\m\1\a\g\m\o\g\g\p\0\v\8\1\q\8\f\o\4\a\g\f\c\x\n\y\g\d\w\9\u\7\2\e\t\5\6\c\1\c\c\d\k\g\t\s\i\a\s\w\p\r\h\o\4\6\l\p\u\o\v\n\a\i\i\k\u\9\u\k\p\1\5\v\r\0\m\u\e\g\s\e\4\4\3\3\5\s\4\0\t\v\e\w\y\v\3\i\x\x\o\u\u\x\g\5\7\7\i\g\z\2\h\j\v\w\s\2\f\n\l\w\0\7\f\5\q\4\u\i\6\k\q\i\o\a\o\c\0\q\x\l\2\9\j\2\f\r\d\y\2\4\s\t\r\0\k\0\7\j\0\v\4\g\l\w\7\3\c\y\x\8\n\y\z\s\g\c\1\n\p\2\5\q\j\e\x\9\q\1\q\n\n\h\o\w\s\6\9\0\c\b\e\j\w\b\p\3\6\w\6\v\0\q\e\s\0\f\o\i\p\5\p\i\p ]] 00:06:35.181 09:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:35.181 09:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:35.181 [2024-10-08 09:12:26.745920] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:35.181 [2024-10-08 09:12:26.746090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60906 ] 00:06:35.440 [2024-10-08 09:12:26.893622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.440 [2024-10-08 09:12:27.029890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.440 [2024-10-08 09:12:27.088390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.699  [2024-10-08T09:12:27.641Z] Copying: 512/512 [B] (average 250 kBps) 00:06:35.958 00:06:35.958 09:12:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h7j7f41scjz06g4d4bxtkc1t59rivivsy2s696ubni65y2iqdcl1ikeljul4kc7ncmci7d00um6ohr0r4zir371arwbsveeru0mxket0uad88y3yskjjha2vyzcskd59y6hpa9z0k1yesucf0zstoa10bmmw45o2wuovcze0lusqqhwwvc5jvypxdj8n1natk8awf8p594rpgheffb0388yff68clb8qi6hpsti5jkf37n7iwry1tcuog9ww8jyrkxe1furdo948y2qamzli60lfjpr3ee0lm1agmoggp0v81q8fo4agfcxnygdw9u72et56c1ccdkgtsiaswprho46lpuovnaiiku9ukp15vr0muegse44335s40tvewyv3ixxouuxg577igz2hjvws2fnlw07f5q4ui6kqioaoc0qxl29j2frdy24str0k07j0v4glw73cyx8nyzsgc1np25qjex9q1qnnhows690cbejwbp36w6v0qes0foip5pip == 
\h\7\j\7\f\4\1\s\c\j\z\0\6\g\4\d\4\b\x\t\k\c\1\t\5\9\r\i\v\i\v\s\y\2\s\6\9\6\u\b\n\i\6\5\y\2\i\q\d\c\l\1\i\k\e\l\j\u\l\4\k\c\7\n\c\m\c\i\7\d\0\0\u\m\6\o\h\r\0\r\4\z\i\r\3\7\1\a\r\w\b\s\v\e\e\r\u\0\m\x\k\e\t\0\u\a\d\8\8\y\3\y\s\k\j\j\h\a\2\v\y\z\c\s\k\d\5\9\y\6\h\p\a\9\z\0\k\1\y\e\s\u\c\f\0\z\s\t\o\a\1\0\b\m\m\w\4\5\o\2\w\u\o\v\c\z\e\0\l\u\s\q\q\h\w\w\v\c\5\j\v\y\p\x\d\j\8\n\1\n\a\t\k\8\a\w\f\8\p\5\9\4\r\p\g\h\e\f\f\b\0\3\8\8\y\f\f\6\8\c\l\b\8\q\i\6\h\p\s\t\i\5\j\k\f\3\7\n\7\i\w\r\y\1\t\c\u\o\g\9\w\w\8\j\y\r\k\x\e\1\f\u\r\d\o\9\4\8\y\2\q\a\m\z\l\i\6\0\l\f\j\p\r\3\e\e\0\l\m\1\a\g\m\o\g\g\p\0\v\8\1\q\8\f\o\4\a\g\f\c\x\n\y\g\d\w\9\u\7\2\e\t\5\6\c\1\c\c\d\k\g\t\s\i\a\s\w\p\r\h\o\4\6\l\p\u\o\v\n\a\i\i\k\u\9\u\k\p\1\5\v\r\0\m\u\e\g\s\e\4\4\3\3\5\s\4\0\t\v\e\w\y\v\3\i\x\x\o\u\u\x\g\5\7\7\i\g\z\2\h\j\v\w\s\2\f\n\l\w\0\7\f\5\q\4\u\i\6\k\q\i\o\a\o\c\0\q\x\l\2\9\j\2\f\r\d\y\2\4\s\t\r\0\k\0\7\j\0\v\4\g\l\w\7\3\c\y\x\8\n\y\z\s\g\c\1\n\p\2\5\q\j\e\x\9\q\1\q\n\n\h\o\w\s\6\9\0\c\b\e\j\w\b\p\3\6\w\6\v\0\q\e\s\0\f\o\i\p\5\p\i\p ]] 00:06:35.958 09:12:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:35.958 09:12:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:35.958 [2024-10-08 09:12:27.470328] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:35.958 [2024-10-08 09:12:27.471303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60908 ] 00:06:35.958 [2024-10-08 09:12:27.611303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.216 [2024-10-08 09:12:27.731458] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.216 [2024-10-08 09:12:27.787217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.216  [2024-10-08T09:12:28.158Z] Copying: 512/512 [B] (average 250 kBps) 00:06:36.475 00:06:36.475 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h7j7f41scjz06g4d4bxtkc1t59rivivsy2s696ubni65y2iqdcl1ikeljul4kc7ncmci7d00um6ohr0r4zir371arwbsveeru0mxket0uad88y3yskjjha2vyzcskd59y6hpa9z0k1yesucf0zstoa10bmmw45o2wuovcze0lusqqhwwvc5jvypxdj8n1natk8awf8p594rpgheffb0388yff68clb8qi6hpsti5jkf37n7iwry1tcuog9ww8jyrkxe1furdo948y2qamzli60lfjpr3ee0lm1agmoggp0v81q8fo4agfcxnygdw9u72et56c1ccdkgtsiaswprho46lpuovnaiiku9ukp15vr0muegse44335s40tvewyv3ixxouuxg577igz2hjvws2fnlw07f5q4ui6kqioaoc0qxl29j2frdy24str0k07j0v4glw73cyx8nyzsgc1np25qjex9q1qnnhows690cbejwbp36w6v0qes0foip5pip == 
\h\7\j\7\f\4\1\s\c\j\z\0\6\g\4\d\4\b\x\t\k\c\1\t\5\9\r\i\v\i\v\s\y\2\s\6\9\6\u\b\n\i\6\5\y\2\i\q\d\c\l\1\i\k\e\l\j\u\l\4\k\c\7\n\c\m\c\i\7\d\0\0\u\m\6\o\h\r\0\r\4\z\i\r\3\7\1\a\r\w\b\s\v\e\e\r\u\0\m\x\k\e\t\0\u\a\d\8\8\y\3\y\s\k\j\j\h\a\2\v\y\z\c\s\k\d\5\9\y\6\h\p\a\9\z\0\k\1\y\e\s\u\c\f\0\z\s\t\o\a\1\0\b\m\m\w\4\5\o\2\w\u\o\v\c\z\e\0\l\u\s\q\q\h\w\w\v\c\5\j\v\y\p\x\d\j\8\n\1\n\a\t\k\8\a\w\f\8\p\5\9\4\r\p\g\h\e\f\f\b\0\3\8\8\y\f\f\6\8\c\l\b\8\q\i\6\h\p\s\t\i\5\j\k\f\3\7\n\7\i\w\r\y\1\t\c\u\o\g\9\w\w\8\j\y\r\k\x\e\1\f\u\r\d\o\9\4\8\y\2\q\a\m\z\l\i\6\0\l\f\j\p\r\3\e\e\0\l\m\1\a\g\m\o\g\g\p\0\v\8\1\q\8\f\o\4\a\g\f\c\x\n\y\g\d\w\9\u\7\2\e\t\5\6\c\1\c\c\d\k\g\t\s\i\a\s\w\p\r\h\o\4\6\l\p\u\o\v\n\a\i\i\k\u\9\u\k\p\1\5\v\r\0\m\u\e\g\s\e\4\4\3\3\5\s\4\0\t\v\e\w\y\v\3\i\x\x\o\u\u\x\g\5\7\7\i\g\z\2\h\j\v\w\s\2\f\n\l\w\0\7\f\5\q\4\u\i\6\k\q\i\o\a\o\c\0\q\x\l\2\9\j\2\f\r\d\y\2\4\s\t\r\0\k\0\7\j\0\v\4\g\l\w\7\3\c\y\x\8\n\y\z\s\g\c\1\n\p\2\5\q\j\e\x\9\q\1\q\n\n\h\o\w\s\6\9\0\c\b\e\j\w\b\p\3\6\w\6\v\0\q\e\s\0\f\o\i\p\5\p\i\p ]] 00:06:36.475 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:36.475 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:36.475 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:36.475 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.475 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:36.475 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:36.475 [2024-10-08 09:12:28.156013] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:36.475 [2024-10-08 09:12:28.156129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60921 ] 00:06:36.734 [2024-10-08 09:12:28.292331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.992 [2024-10-08 09:12:28.418463] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.992 [2024-10-08 09:12:28.473126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.992  [2024-10-08T09:12:28.933Z] Copying: 512/512 [B] (average 500 kBps) 00:06:37.250 00:06:37.250 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1fmkeidr64yt6vp9a9mlaoqsdcb7l2yulejq49gtdoj69vqrkbeb8dvg4hzp60nvplar6m8xzo8cvrvfxeuy66v0ytk8apf5apv25zxq8qzi7kcvm9xnrsvmaqcldlm67ya6i36p3kguth0anq5ejnyg19j5465drrzueqeiz4uu3mergbc9xv3turx1in1ivlnb31o5upkdcr2rywedze13f71l7pijvs8p5xkcs0yw533ks9z9axy5ew4klw5g98d6u580foz8ynpvm0g7psxf74kqdrl653qg3og3mpjj5zz3t5nblqyx80e3sdosqt9tas1og07nl0rcenmfywia4bwgym75htm0ubmipx55n7jyyg6wvkyyl0zs46t1qk16dyed47jxclvugcn0qcao7l24jrz1ktxp5ww0lpc3yktkgmwbuii7po2z5n3wfjyb0kcslxackfdub1yw6xg17n0m386gaj9u0539xqlzquivyxl420d8rq01eqed == \1\f\m\k\e\i\d\r\6\4\y\t\6\v\p\9\a\9\m\l\a\o\q\s\d\c\b\7\l\2\y\u\l\e\j\q\4\9\g\t\d\o\j\6\9\v\q\r\k\b\e\b\8\d\v\g\4\h\z\p\6\0\n\v\p\l\a\r\6\m\8\x\z\o\8\c\v\r\v\f\x\e\u\y\6\6\v\0\y\t\k\8\a\p\f\5\a\p\v\2\5\z\x\q\8\q\z\i\7\k\c\v\m\9\x\n\r\s\v\m\a\q\c\l\d\l\m\6\7\y\a\6\i\3\6\p\3\k\g\u\t\h\0\a\n\q\5\e\j\n\y\g\1\9\j\5\4\6\5\d\r\r\z\u\e\q\e\i\z\4\u\u\3\m\e\r\g\b\c\9\x\v\3\t\u\r\x\1\i\n\1\i\v\l\n\b\3\1\o\5\u\p\k\d\c\r\2\r\y\w\e\d\z\e\1\3\f\7\1\l\7\p\i\j\v\s\8\p\5\x\k\c\s\0\y\w\5\3\3\k\s\9\z\9\a\x\y\5\e\w\4\k\l\w\5\g\9\8\d\6\u\5\8\0\f\o\z\8\y\n\p\v\m\0\g\7\p\s\x\f\7\4\k\q\d\r\l\6\5\3\q\g\3\o\g\3\m\p\j\j\5\z\z\3\t\5\n\b\l\q\y\x\8\0\e\3\s\d\o\s\q\t\9\t\a\s\1\o\g\0\7\n\l\0\r\c\e\n\m\f\y\w\i\a\4\b\w\g\y\m\7\5\h\t\m\0\u\b\m\i\p\x\5\5\n\7\j\y\y\g\6\w\v\k\y\y\l\0\z\s\4\6\t\1\q\k\1\6\d\y\e\d\4\7\j\x\c\l\v\u\g\c\n\0\q\c\a\o\7\l\2\4\j\r\z\1\k\t\x\p\5\w\w\0\l\p\c\3\y\k\t\k\g\m\w\b\u\i\i\7\p\o\2\z\5\n\3\w\f\j\y\b\0\k\c\s\l\x\a\c\k\f\d\u\b\1\y\w\6\x\g\1\7\n\0\m\3\8\6\g\a\j\9\u\0\5\3\9\x\q\l\z\q\u\i\v\y\x\l\4\2\0\d\8\r\q\0\1\e\q\e\d ]] 00:06:37.250 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:37.251 09:12:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:37.251 [2024-10-08 09:12:28.828543] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:37.251 [2024-10-08 09:12:28.828666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60934 ] 00:06:37.509 [2024-10-08 09:12:28.967548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.509 [2024-10-08 09:12:29.086742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.509 [2024-10-08 09:12:29.140449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.509  [2024-10-08T09:12:29.450Z] Copying: 512/512 [B] (average 500 kBps) 00:06:37.767 00:06:37.767 09:12:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1fmkeidr64yt6vp9a9mlaoqsdcb7l2yulejq49gtdoj69vqrkbeb8dvg4hzp60nvplar6m8xzo8cvrvfxeuy66v0ytk8apf5apv25zxq8qzi7kcvm9xnrsvmaqcldlm67ya6i36p3kguth0anq5ejnyg19j5465drrzueqeiz4uu3mergbc9xv3turx1in1ivlnb31o5upkdcr2rywedze13f71l7pijvs8p5xkcs0yw533ks9z9axy5ew4klw5g98d6u580foz8ynpvm0g7psxf74kqdrl653qg3og3mpjj5zz3t5nblqyx80e3sdosqt9tas1og07nl0rcenmfywia4bwgym75htm0ubmipx55n7jyyg6wvkyyl0zs46t1qk16dyed47jxclvugcn0qcao7l24jrz1ktxp5ww0lpc3yktkgmwbuii7po2z5n3wfjyb0kcslxackfdub1yw6xg17n0m386gaj9u0539xqlzquivyxl420d8rq01eqed == \1\f\m\k\e\i\d\r\6\4\y\t\6\v\p\9\a\9\m\l\a\o\q\s\d\c\b\7\l\2\y\u\l\e\j\q\4\9\g\t\d\o\j\6\9\v\q\r\k\b\e\b\8\d\v\g\4\h\z\p\6\0\n\v\p\l\a\r\6\m\8\x\z\o\8\c\v\r\v\f\x\e\u\y\6\6\v\0\y\t\k\8\a\p\f\5\a\p\v\2\5\z\x\q\8\q\z\i\7\k\c\v\m\9\x\n\r\s\v\m\a\q\c\l\d\l\m\6\7\y\a\6\i\3\6\p\3\k\g\u\t\h\0\a\n\q\5\e\j\n\y\g\1\9\j\5\4\6\5\d\r\r\z\u\e\q\e\i\z\4\u\u\3\m\e\r\g\b\c\9\x\v\3\t\u\r\x\1\i\n\1\i\v\l\n\b\3\1\o\5\u\p\k\d\c\r\2\r\y\w\e\d\z\e\1\3\f\7\1\l\7\p\i\j\v\s\8\p\5\x\k\c\s\0\y\w\5\3\3\k\s\9\z\9\a\x\y\5\e\w\4\k\l\w\5\g\9\8\d\6\u\5\8\0\f\o\z\8\y\n\p\v\m\0\g\7\p\s\x\f\7\4\k\q\d\r\l\6\5\3\q\g\3\o\g\3\m\p\j\j\5\z\z\3\t\5\n\b\l\q\y\x\8\0\e\3\s\d\o\s\q\t\9\t\a\s\1\o\g\0\7\n\l\0\r\c\e\n\m\f\y\w\i\a\4\b\w\g\y\m\7\5\h\t\m\0\u\b\m\i\p\x\5\5\n\7\j\y\y\g\6\w\v\k\y\y\l\0\z\s\4\6\t\1\q\k\1\6\d\y\e\d\4\7\j\x\c\l\v\u\g\c\n\0\q\c\a\o\7\l\2\4\j\r\z\1\k\t\x\p\5\w\w\0\l\p\c\3\y\k\t\k\g\m\w\b\u\i\i\7\p\o\2\z\5\n\3\w\f\j\y\b\0\k\c\s\l\x\a\c\k\f\d\u\b\1\y\w\6\x\g\1\7\n\0\m\3\8\6\g\a\j\9\u\0\5\3\9\x\q\l\z\q\u\i\v\y\x\l\4\2\0\d\8\r\q\0\1\e\q\e\d ]] 00:06:37.767 09:12:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:37.767 09:12:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:38.027 [2024-10-08 09:12:29.478151] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:38.027 [2024-10-08 09:12:29.478297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60936 ] 00:06:38.027 [2024-10-08 09:12:29.613559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.285 [2024-10-08 09:12:29.731955] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.285 [2024-10-08 09:12:29.785723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.285  [2024-10-08T09:12:30.227Z] Copying: 512/512 [B] (average 500 kBps) 00:06:38.544 00:06:38.544 09:12:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1fmkeidr64yt6vp9a9mlaoqsdcb7l2yulejq49gtdoj69vqrkbeb8dvg4hzp60nvplar6m8xzo8cvrvfxeuy66v0ytk8apf5apv25zxq8qzi7kcvm9xnrsvmaqcldlm67ya6i36p3kguth0anq5ejnyg19j5465drrzueqeiz4uu3mergbc9xv3turx1in1ivlnb31o5upkdcr2rywedze13f71l7pijvs8p5xkcs0yw533ks9z9axy5ew4klw5g98d6u580foz8ynpvm0g7psxf74kqdrl653qg3og3mpjj5zz3t5nblqyx80e3sdosqt9tas1og07nl0rcenmfywia4bwgym75htm0ubmipx55n7jyyg6wvkyyl0zs46t1qk16dyed47jxclvugcn0qcao7l24jrz1ktxp5ww0lpc3yktkgmwbuii7po2z5n3wfjyb0kcslxackfdub1yw6xg17n0m386gaj9u0539xqlzquivyxl420d8rq01eqed == \1\f\m\k\e\i\d\r\6\4\y\t\6\v\p\9\a\9\m\l\a\o\q\s\d\c\b\7\l\2\y\u\l\e\j\q\4\9\g\t\d\o\j\6\9\v\q\r\k\b\e\b\8\d\v\g\4\h\z\p\6\0\n\v\p\l\a\r\6\m\8\x\z\o\8\c\v\r\v\f\x\e\u\y\6\6\v\0\y\t\k\8\a\p\f\5\a\p\v\2\5\z\x\q\8\q\z\i\7\k\c\v\m\9\x\n\r\s\v\m\a\q\c\l\d\l\m\6\7\y\a\6\i\3\6\p\3\k\g\u\t\h\0\a\n\q\5\e\j\n\y\g\1\9\j\5\4\6\5\d\r\r\z\u\e\q\e\i\z\4\u\u\3\m\e\r\g\b\c\9\x\v\3\t\u\r\x\1\i\n\1\i\v\l\n\b\3\1\o\5\u\p\k\d\c\r\2\r\y\w\e\d\z\e\1\3\f\7\1\l\7\p\i\j\v\s\8\p\5\x\k\c\s\0\y\w\5\3\3\k\s\9\z\9\a\x\y\5\e\w\4\k\l\w\5\g\9\8\d\6\u\5\8\0\f\o\z\8\y\n\p\v\m\0\g\7\p\s\x\f\7\4\k\q\d\r\l\6\5\3\q\g\3\o\g\3\m\p\j\j\5\z\z\3\t\5\n\b\l\q\y\x\8\0\e\3\s\d\o\s\q\t\9\t\a\s\1\o\g\0\7\n\l\0\r\c\e\n\m\f\y\w\i\a\4\b\w\g\y\m\7\5\h\t\m\0\u\b\m\i\p\x\5\5\n\7\j\y\y\g\6\w\v\k\y\y\l\0\z\s\4\6\t\1\q\k\1\6\d\y\e\d\4\7\j\x\c\l\v\u\g\c\n\0\q\c\a\o\7\l\2\4\j\r\z\1\k\t\x\p\5\w\w\0\l\p\c\3\y\k\t\k\g\m\w\b\u\i\i\7\p\o\2\z\5\n\3\w\f\j\y\b\0\k\c\s\l\x\a\c\k\f\d\u\b\1\y\w\6\x\g\1\7\n\0\m\3\8\6\g\a\j\9\u\0\5\3\9\x\q\l\z\q\u\i\v\y\x\l\4\2\0\d\8\r\q\0\1\e\q\e\d ]] 00:06:38.544 09:12:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.544 09:12:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:38.544 [2024-10-08 09:12:30.140769] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:38.544 [2024-10-08 09:12:30.140891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60949 ] 00:06:38.803 [2024-10-08 09:12:30.278863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.803 [2024-10-08 09:12:30.391777] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.803 [2024-10-08 09:12:30.448648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.803  [2024-10-08T09:12:31.054Z] Copying: 512/512 [B] (average 500 kBps) 00:06:39.371 00:06:39.371 09:12:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1fmkeidr64yt6vp9a9mlaoqsdcb7l2yulejq49gtdoj69vqrkbeb8dvg4hzp60nvplar6m8xzo8cvrvfxeuy66v0ytk8apf5apv25zxq8qzi7kcvm9xnrsvmaqcldlm67ya6i36p3kguth0anq5ejnyg19j5465drrzueqeiz4uu3mergbc9xv3turx1in1ivlnb31o5upkdcr2rywedze13f71l7pijvs8p5xkcs0yw533ks9z9axy5ew4klw5g98d6u580foz8ynpvm0g7psxf74kqdrl653qg3og3mpjj5zz3t5nblqyx80e3sdosqt9tas1og07nl0rcenmfywia4bwgym75htm0ubmipx55n7jyyg6wvkyyl0zs46t1qk16dyed47jxclvugcn0qcao7l24jrz1ktxp5ww0lpc3yktkgmwbuii7po2z5n3wfjyb0kcslxackfdub1yw6xg17n0m386gaj9u0539xqlzquivyxl420d8rq01eqed == \1\f\m\k\e\i\d\r\6\4\y\t\6\v\p\9\a\9\m\l\a\o\q\s\d\c\b\7\l\2\y\u\l\e\j\q\4\9\g\t\d\o\j\6\9\v\q\r\k\b\e\b\8\d\v\g\4\h\z\p\6\0\n\v\p\l\a\r\6\m\8\x\z\o\8\c\v\r\v\f\x\e\u\y\6\6\v\0\y\t\k\8\a\p\f\5\a\p\v\2\5\z\x\q\8\q\z\i\7\k\c\v\m\9\x\n\r\s\v\m\a\q\c\l\d\l\m\6\7\y\a\6\i\3\6\p\3\k\g\u\t\h\0\a\n\q\5\e\j\n\y\g\1\9\j\5\4\6\5\d\r\r\z\u\e\q\e\i\z\4\u\u\3\m\e\r\g\b\c\9\x\v\3\t\u\r\x\1\i\n\1\i\v\l\n\b\3\1\o\5\u\p\k\d\c\r\2\r\y\w\e\d\z\e\1\3\f\7\1\l\7\p\i\j\v\s\8\p\5\x\k\c\s\0\y\w\5\3\3\k\s\9\z\9\a\x\y\5\e\w\4\k\l\w\5\g\9\8\d\6\u\5\8\0\f\o\z\8\y\n\p\v\m\0\g\7\p\s\x\f\7\4\k\q\d\r\l\6\5\3\q\g\3\o\g\3\m\p\j\j\5\z\z\3\t\5\n\b\l\q\y\x\8\0\e\3\s\d\o\s\q\t\9\t\a\s\1\o\g\0\7\n\l\0\r\c\e\n\m\f\y\w\i\a\4\b\w\g\y\m\7\5\h\t\m\0\u\b\m\i\p\x\5\5\n\7\j\y\y\g\6\w\v\k\y\y\l\0\z\s\4\6\t\1\q\k\1\6\d\y\e\d\4\7\j\x\c\l\v\u\g\c\n\0\q\c\a\o\7\l\2\4\j\r\z\1\k\t\x\p\5\w\w\0\l\p\c\3\y\k\t\k\g\m\w\b\u\i\i\7\p\o\2\z\5\n\3\w\f\j\y\b\0\k\c\s\l\x\a\c\k\f\d\u\b\1\y\w\6\x\g\1\7\n\0\m\3\8\6\g\a\j\9\u\0\5\3\9\x\q\l\z\q\u\i\v\y\x\l\4\2\0\d\8\r\q\0\1\e\q\e\d ]] 00:06:39.371 00:06:39.371 real 0m5.431s 00:06:39.371 user 0m3.174s 00:06:39.371 sys 0m1.274s 00:06:39.371 09:12:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.371 ************************************ 00:06:39.371 END TEST dd_flags_misc_forced_aio 00:06:39.371 09:12:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:39.371 ************************************ 00:06:39.371 09:12:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:39.371 09:12:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:39.371 09:12:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:39.371 00:06:39.371 real 0m23.650s 00:06:39.371 user 0m12.467s 00:06:39.371 sys 0m7.120s 00:06:39.371 09:12:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.371 09:12:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:06:39.371 ************************************ 00:06:39.371 END TEST spdk_dd_posix 00:06:39.371 ************************************ 00:06:39.371 09:12:30 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:39.371 09:12:30 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.371 09:12:30 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.371 09:12:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:39.371 ************************************ 00:06:39.371 START TEST spdk_dd_malloc 00:06:39.371 ************************************ 00:06:39.371 09:12:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:39.371 * Looking for test storage... 00:06:39.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:39.371 09:12:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.371 09:12:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.371 09:12:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.371 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.631 --rc genhtml_branch_coverage=1 00:06:39.631 --rc genhtml_function_coverage=1 00:06:39.631 --rc genhtml_legend=1 00:06:39.631 --rc geninfo_all_blocks=1 00:06:39.631 --rc geninfo_unexecuted_blocks=1 00:06:39.631 00:06:39.631 ' 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.631 --rc genhtml_branch_coverage=1 00:06:39.631 --rc genhtml_function_coverage=1 00:06:39.631 --rc genhtml_legend=1 00:06:39.631 --rc geninfo_all_blocks=1 00:06:39.631 --rc geninfo_unexecuted_blocks=1 00:06:39.631 00:06:39.631 ' 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.631 --rc genhtml_branch_coverage=1 00:06:39.631 --rc genhtml_function_coverage=1 00:06:39.631 --rc genhtml_legend=1 00:06:39.631 --rc geninfo_all_blocks=1 00:06:39.631 --rc geninfo_unexecuted_blocks=1 00:06:39.631 00:06:39.631 ' 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.631 --rc genhtml_branch_coverage=1 00:06:39.631 --rc genhtml_function_coverage=1 00:06:39.631 --rc genhtml_legend=1 00:06:39.631 --rc geninfo_all_blocks=1 00:06:39.631 --rc geninfo_unexecuted_blocks=1 00:06:39.631 00:06:39.631 ' 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.631 09:12:31 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:39.631 ************************************ 00:06:39.631 START TEST dd_malloc_copy 00:06:39.631 ************************************ 00:06:39.631 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:39.632 09:12:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:39.632 { 00:06:39.632 "subsystems": [ 00:06:39.632 { 00:06:39.632 "subsystem": "bdev", 00:06:39.632 "config": [ 00:06:39.632 { 00:06:39.632 "params": { 00:06:39.632 "block_size": 512, 00:06:39.632 "num_blocks": 1048576, 00:06:39.632 "name": "malloc0" 00:06:39.632 }, 00:06:39.632 "method": "bdev_malloc_create" 00:06:39.632 }, 00:06:39.632 { 00:06:39.632 "params": { 00:06:39.632 "block_size": 512, 00:06:39.632 "num_blocks": 1048576, 00:06:39.632 "name": "malloc1" 00:06:39.632 }, 00:06:39.632 "method": "bdev_malloc_create" 00:06:39.632 }, 00:06:39.632 { 00:06:39.632 "method": "bdev_wait_for_examine" 00:06:39.632 } 00:06:39.632 ] 00:06:39.632 } 00:06:39.632 ] 00:06:39.632 } 00:06:39.632 [2024-10-08 09:12:31.157403] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:39.632 [2024-10-08 09:12:31.157579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61031 ] 00:06:39.632 [2024-10-08 09:12:31.307513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.891 [2024-10-08 09:12:31.425572] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.891 [2024-10-08 09:12:31.480882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.268  [2024-10-08T09:12:33.888Z] Copying: 213/512 [MB] (213 MBps) [2024-10-08T09:12:34.479Z] Copying: 420/512 [MB] (207 MBps) [2024-10-08T09:12:35.047Z] Copying: 512/512 [MB] (average 210 MBps) 00:06:43.365 00:06:43.365 09:12:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:43.365 09:12:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:43.365 09:12:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:43.365 09:12:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.365 [2024-10-08 09:12:34.950468] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:43.365 [2024-10-08 09:12:34.950597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61084 ] 00:06:43.365 { 00:06:43.365 "subsystems": [ 00:06:43.365 { 00:06:43.365 "subsystem": "bdev", 00:06:43.365 "config": [ 00:06:43.365 { 00:06:43.365 "params": { 00:06:43.365 "block_size": 512, 00:06:43.365 "num_blocks": 1048576, 00:06:43.365 "name": "malloc0" 00:06:43.365 }, 00:06:43.365 "method": "bdev_malloc_create" 00:06:43.365 }, 00:06:43.365 { 00:06:43.365 "params": { 00:06:43.365 "block_size": 512, 00:06:43.365 "num_blocks": 1048576, 00:06:43.365 "name": "malloc1" 00:06:43.365 }, 00:06:43.365 "method": "bdev_malloc_create" 00:06:43.365 }, 00:06:43.365 { 00:06:43.365 "method": "bdev_wait_for_examine" 00:06:43.365 } 00:06:43.365 ] 00:06:43.365 } 00:06:43.365 ] 00:06:43.365 } 00:06:43.623 [2024-10-08 09:12:35.090877] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.623 [2024-10-08 09:12:35.214429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.623 [2024-10-08 09:12:35.268460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.997  [2024-10-08T09:12:37.616Z] Copying: 213/512 [MB] (213 MBps) [2024-10-08T09:12:38.184Z] Copying: 417/512 [MB] (204 MBps) [2024-10-08T09:12:38.825Z] Copying: 512/512 [MB] (average 204 MBps) 00:06:47.142 00:06:47.142 00:06:47.142 real 0m7.701s 00:06:47.142 user 0m6.657s 00:06:47.142 sys 0m0.886s 00:06:47.142 09:12:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.142 ************************************ 00:06:47.142 END TEST dd_malloc_copy 00:06:47.142 ************************************ 00:06:47.142 09:12:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.142 00:06:47.142 real 0m7.949s 00:06:47.142 user 0m6.778s 00:06:47.142 sys 0m1.019s 00:06:47.142 09:12:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.142 09:12:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:47.142 ************************************ 00:06:47.142 END TEST spdk_dd_malloc 00:06:47.142 ************************************ 00:06:47.402 09:12:38 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:47.402 09:12:38 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:47.402 09:12:38 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.402 09:12:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:47.402 ************************************ 00:06:47.402 START TEST spdk_dd_bdev_to_bdev 00:06:47.402 ************************************ 00:06:47.402 09:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:47.402 * Looking for test storage... 
00:06:47.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:47.402 09:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.402 09:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.402 09:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.402 --rc genhtml_branch_coverage=1 00:06:47.402 --rc genhtml_function_coverage=1 00:06:47.402 --rc genhtml_legend=1 00:06:47.402 --rc geninfo_all_blocks=1 00:06:47.402 --rc geninfo_unexecuted_blocks=1 00:06:47.402 00:06:47.402 ' 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.402 --rc genhtml_branch_coverage=1 00:06:47.402 --rc genhtml_function_coverage=1 00:06:47.402 --rc genhtml_legend=1 00:06:47.402 --rc geninfo_all_blocks=1 00:06:47.402 --rc geninfo_unexecuted_blocks=1 00:06:47.402 00:06:47.402 ' 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.402 --rc genhtml_branch_coverage=1 00:06:47.402 --rc genhtml_function_coverage=1 00:06:47.402 --rc genhtml_legend=1 00:06:47.402 --rc geninfo_all_blocks=1 00:06:47.402 --rc geninfo_unexecuted_blocks=1 00:06:47.402 00:06:47.402 ' 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.402 --rc genhtml_branch_coverage=1 00:06:47.402 --rc genhtml_function_coverage=1 00:06:47.402 --rc genhtml_legend=1 00:06:47.402 --rc geninfo_all_blocks=1 00:06:47.402 --rc geninfo_unexecuted_blocks=1 00:06:47.402 00:06:47.402 ' 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.402 09:12:39 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:47.402 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.403 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:47.662 ************************************ 00:06:47.662 START TEST dd_inflate_file 00:06:47.662 ************************************ 00:06:47.662 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:47.662 [2024-10-08 09:12:39.147799] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:47.662 [2024-10-08 09:12:39.147909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61202 ] 00:06:47.662 [2024-10-08 09:12:39.289954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.922 [2024-10-08 09:12:39.435549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.922 [2024-10-08 09:12:39.496875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.922  [2024-10-08T09:12:39.863Z] Copying: 64/64 [MB] (average 1391 MBps) 00:06:48.180 00:06:48.181 ************************************ 00:06:48.181 END TEST dd_inflate_file 00:06:48.181 ************************************ 00:06:48.181 00:06:48.181 real 0m0.730s 00:06:48.181 user 0m0.464s 00:06:48.181 sys 0m0.335s 00:06:48.181 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.181 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:48.439 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:48.439 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:48.439 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:48.439 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:48.439 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:48.439 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:48.439 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:48.439 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.440 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:48.440 ************************************ 00:06:48.440 START TEST dd_copy_to_out_bdev 00:06:48.440 ************************************ 00:06:48.440 09:12:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:48.440 { 00:06:48.440 "subsystems": [ 00:06:48.440 { 00:06:48.440 "subsystem": "bdev", 00:06:48.440 "config": [ 00:06:48.440 { 00:06:48.440 "params": { 00:06:48.440 "trtype": "pcie", 00:06:48.440 "traddr": "0000:00:10.0", 00:06:48.440 "name": "Nvme0" 00:06:48.440 }, 00:06:48.440 "method": "bdev_nvme_attach_controller" 00:06:48.440 }, 00:06:48.440 { 00:06:48.440 "params": { 00:06:48.440 "trtype": "pcie", 00:06:48.440 "traddr": "0000:00:11.0", 00:06:48.440 "name": "Nvme1" 00:06:48.440 }, 00:06:48.440 "method": "bdev_nvme_attach_controller" 00:06:48.440 }, 00:06:48.440 { 00:06:48.440 "method": "bdev_wait_for_examine" 00:06:48.440 } 00:06:48.440 ] 00:06:48.440 } 00:06:48.440 ] 00:06:48.440 } 00:06:48.440 [2024-10-08 09:12:39.933617] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:48.440 [2024-10-08 09:12:39.934001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61233 ] 00:06:48.440 [2024-10-08 09:12:40.073915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.698 [2024-10-08 09:12:40.199263] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.698 [2024-10-08 09:12:40.256528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.073  [2024-10-08T09:12:41.756Z] Copying: 64/64 [MB] (average 64 MBps) 00:06:50.073 00:06:50.073 ************************************ 00:06:50.073 END TEST dd_copy_to_out_bdev 00:06:50.073 ************************************ 00:06:50.073 00:06:50.073 real 0m1.822s 00:06:50.073 user 0m1.565s 00:06:50.073 sys 0m1.365s 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:50.073 ************************************ 00:06:50.073 START TEST dd_offset_magic 00:06:50.073 ************************************ 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:50.073 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:50.332 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:50.332 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:50.332 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:50.332 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:50.332 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:50.332 09:12:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:50.332 [2024-10-08 09:12:41.815885] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:50.332 [2024-10-08 09:12:41.816688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61276 ] 00:06:50.332 { 00:06:50.332 "subsystems": [ 00:06:50.332 { 00:06:50.332 "subsystem": "bdev", 00:06:50.332 "config": [ 00:06:50.332 { 00:06:50.332 "params": { 00:06:50.332 "trtype": "pcie", 00:06:50.332 "traddr": "0000:00:10.0", 00:06:50.332 "name": "Nvme0" 00:06:50.332 }, 00:06:50.332 "method": "bdev_nvme_attach_controller" 00:06:50.332 }, 00:06:50.332 { 00:06:50.332 "params": { 00:06:50.332 "trtype": "pcie", 00:06:50.332 "traddr": "0000:00:11.0", 00:06:50.332 "name": "Nvme1" 00:06:50.332 }, 00:06:50.332 "method": "bdev_nvme_attach_controller" 00:06:50.332 }, 00:06:50.332 { 00:06:50.332 "method": "bdev_wait_for_examine" 00:06:50.332 } 00:06:50.332 ] 00:06:50.332 } 00:06:50.332 ] 00:06:50.332 } 00:06:50.332 [2024-10-08 09:12:41.956755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.591 [2024-10-08 09:12:42.088069] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.591 [2024-10-08 09:12:42.150855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.850  [2024-10-08T09:12:42.792Z] Copying: 65/65 [MB] (average 1031 MBps) 00:06:51.109 00:06:51.109 09:12:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:51.109 09:12:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:51.109 09:12:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:51.109 09:12:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:51.109 [2024-10-08 09:12:42.745748] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:51.109 [2024-10-08 09:12:42.745855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61296 ] 00:06:51.109 { 00:06:51.109 "subsystems": [ 00:06:51.109 { 00:06:51.109 "subsystem": "bdev", 00:06:51.109 "config": [ 00:06:51.109 { 00:06:51.109 "params": { 00:06:51.109 "trtype": "pcie", 00:06:51.109 "traddr": "0000:00:10.0", 00:06:51.109 "name": "Nvme0" 00:06:51.109 }, 00:06:51.109 "method": "bdev_nvme_attach_controller" 00:06:51.109 }, 00:06:51.109 { 00:06:51.109 "params": { 00:06:51.109 "trtype": "pcie", 00:06:51.109 "traddr": "0000:00:11.0", 00:06:51.109 "name": "Nvme1" 00:06:51.109 }, 00:06:51.109 "method": "bdev_nvme_attach_controller" 00:06:51.109 }, 00:06:51.109 { 00:06:51.109 "method": "bdev_wait_for_examine" 00:06:51.109 } 00:06:51.109 ] 00:06:51.109 } 00:06:51.109 ] 00:06:51.109 } 00:06:51.391 [2024-10-08 09:12:42.886378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.391 [2024-10-08 09:12:43.010376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.650 [2024-10-08 09:12:43.067013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.650  [2024-10-08T09:12:43.592Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:51.909 00:06:51.909 09:12:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:51.909 09:12:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:51.909 09:12:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:51.909 09:12:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:51.909 09:12:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:51.909 09:12:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:51.909 09:12:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:51.909 { 00:06:51.909 "subsystems": [ 00:06:51.909 { 00:06:51.909 "subsystem": "bdev", 00:06:51.909 "config": [ 00:06:51.909 { 00:06:51.909 "params": { 00:06:51.909 "trtype": "pcie", 00:06:51.909 "traddr": "0000:00:10.0", 00:06:51.909 "name": "Nvme0" 00:06:51.909 }, 00:06:51.909 "method": "bdev_nvme_attach_controller" 00:06:51.909 }, 00:06:51.909 { 00:06:51.909 "params": { 00:06:51.909 "trtype": "pcie", 00:06:51.909 "traddr": "0000:00:11.0", 00:06:51.909 "name": "Nvme1" 00:06:51.909 }, 00:06:51.909 "method": "bdev_nvme_attach_controller" 00:06:51.909 }, 00:06:51.909 { 00:06:51.909 "method": "bdev_wait_for_examine" 00:06:51.909 } 00:06:51.909 ] 00:06:51.909 } 00:06:51.909 ] 00:06:51.909 } 00:06:51.909 [2024-10-08 09:12:43.527376] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:06:51.909 [2024-10-08 09:12:43.527485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61318 ] 00:06:52.168 [2024-10-08 09:12:43.668200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.168 [2024-10-08 09:12:43.776660] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.168 [2024-10-08 09:12:43.839352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.426  [2024-10-08T09:12:44.368Z] Copying: 65/65 [MB] (average 1120 MBps) 00:06:52.685 00:06:52.685 09:12:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:52.685 09:12:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:52.685 09:12:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:52.685 09:12:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:52.943 [2024-10-08 09:12:44.415237] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:52.943 [2024-10-08 09:12:44.415337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61338 ] 00:06:52.943 { 00:06:52.943 "subsystems": [ 00:06:52.943 { 00:06:52.943 "subsystem": "bdev", 00:06:52.943 "config": [ 00:06:52.943 { 00:06:52.943 "params": { 00:06:52.943 "trtype": "pcie", 00:06:52.943 "traddr": "0000:00:10.0", 00:06:52.943 "name": "Nvme0" 00:06:52.943 }, 00:06:52.943 "method": "bdev_nvme_attach_controller" 00:06:52.943 }, 00:06:52.944 { 00:06:52.944 "params": { 00:06:52.944 "trtype": "pcie", 00:06:52.944 "traddr": "0000:00:11.0", 00:06:52.944 "name": "Nvme1" 00:06:52.944 }, 00:06:52.944 "method": "bdev_nvme_attach_controller" 00:06:52.944 }, 00:06:52.944 { 00:06:52.944 "method": "bdev_wait_for_examine" 00:06:52.944 } 00:06:52.944 ] 00:06:52.944 } 00:06:52.944 ] 00:06:52.944 } 00:06:52.944 [2024-10-08 09:12:44.550960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.202 [2024-10-08 09:12:44.664395] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.202 [2024-10-08 09:12:44.721055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.460  [2024-10-08T09:12:45.143Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:53.460 00:06:53.719 ************************************ 00:06:53.719 END TEST dd_offset_magic 00:06:53.719 ************************************ 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:53.719 00:06:53.719 real 0m3.394s 00:06:53.719 user 0m2.475s 00:06:53.719 sys 0m1.004s 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 
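The dd_offset_magic pass that finishes here exercises offset I/O between the two NVMe bdevs: for each offset (in MiB) it copies a window from Nvme0n1 into Nvme1n1 with --seek, reads one 1 MiB block back out with --skip, and checks that the 26-byte magic marker landed where expected. A minimal sketch of one iteration, assuming the /dev/fd/62 arguments seen in the trace come from a gen_conf process substitution and spelling out the read redirection that the trace does not show, is:

  # copy 65 MiB from Nvme0n1 into Nvme1n1 starting at the 64 MiB mark
  spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json <(gen_conf)
  # read the single 1 MiB block at that offset back into a dump file
  spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=64 --bs=1048576 --json <(gen_conf)
  # the dump must start with the 26-byte magic marker
  read -rn26 magic_check < dd.dump1
  [[ $magic_check == 'This Is Our Magic, find it' ]]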
00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:53.719 09:12:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:53.719 [2024-10-08 09:12:45.255085] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:53.719 [2024-10-08 09:12:45.255413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61370 ] 00:06:53.719 { 00:06:53.719 "subsystems": [ 00:06:53.719 { 00:06:53.719 "subsystem": "bdev", 00:06:53.719 "config": [ 00:06:53.719 { 00:06:53.719 "params": { 00:06:53.719 "trtype": "pcie", 00:06:53.719 "traddr": "0000:00:10.0", 00:06:53.719 "name": "Nvme0" 00:06:53.719 }, 00:06:53.719 "method": "bdev_nvme_attach_controller" 00:06:53.719 }, 00:06:53.719 { 00:06:53.719 "params": { 00:06:53.719 "trtype": "pcie", 00:06:53.719 "traddr": "0000:00:11.0", 00:06:53.719 "name": "Nvme1" 00:06:53.719 }, 00:06:53.719 "method": "bdev_nvme_attach_controller" 00:06:53.719 }, 00:06:53.719 { 00:06:53.719 "method": "bdev_wait_for_examine" 00:06:53.719 } 00:06:53.719 ] 00:06:53.719 } 00:06:53.719 ] 00:06:53.719 } 00:06:53.719 [2024-10-08 09:12:45.395114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.978 [2024-10-08 09:12:45.520610] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.978 [2024-10-08 09:12:45.578407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.236  [2024-10-08T09:12:46.178Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:54.495 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json 
/dev/fd/62 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:54.495 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:54.495 { 00:06:54.495 "subsystems": [ 00:06:54.495 { 00:06:54.495 "subsystem": "bdev", 00:06:54.495 "config": [ 00:06:54.495 { 00:06:54.495 "params": { 00:06:54.495 "trtype": "pcie", 00:06:54.495 "traddr": "0000:00:10.0", 00:06:54.495 "name": "Nvme0" 00:06:54.495 }, 00:06:54.495 "method": "bdev_nvme_attach_controller" 00:06:54.495 }, 00:06:54.495 { 00:06:54.495 "params": { 00:06:54.495 "trtype": "pcie", 00:06:54.495 "traddr": "0000:00:11.0", 00:06:54.495 "name": "Nvme1" 00:06:54.495 }, 00:06:54.495 "method": "bdev_nvme_attach_controller" 00:06:54.495 }, 00:06:54.495 { 00:06:54.495 "method": "bdev_wait_for_examine" 00:06:54.495 } 00:06:54.495 ] 00:06:54.495 } 00:06:54.495 ] 00:06:54.495 } 00:06:54.495 [2024-10-08 09:12:46.069840] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:54.495 [2024-10-08 09:12:46.069954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61385 ] 00:06:54.754 [2024-10-08 09:12:46.216648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.754 [2024-10-08 09:12:46.342193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.754 [2024-10-08 09:12:46.402535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.013  [2024-10-08T09:12:46.955Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:06:55.272 00:06:55.272 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:55.272 ************************************ 00:06:55.272 END TEST spdk_dd_bdev_to_bdev 00:06:55.272 ************************************ 00:06:55.272 00:06:55.272 real 0m7.985s 00:06:55.272 user 0m5.869s 00:06:55.272 sys 0m3.489s 00:06:55.272 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.272 09:12:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.272 09:12:46 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:55.272 09:12:46 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:55.272 09:12:46 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.272 09:12:46 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.272 09:12:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:55.272 ************************************ 00:06:55.272 START TEST spdk_dd_uring 00:06:55.272 ************************************ 00:06:55.272 09:12:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:55.531 * Looking for test storage... 
00:06:55.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.531 --rc genhtml_branch_coverage=1 00:06:55.531 --rc genhtml_function_coverage=1 00:06:55.531 --rc genhtml_legend=1 00:06:55.531 --rc geninfo_all_blocks=1 00:06:55.531 --rc geninfo_unexecuted_blocks=1 00:06:55.531 00:06:55.531 ' 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.531 --rc genhtml_branch_coverage=1 00:06:55.531 --rc genhtml_function_coverage=1 00:06:55.531 --rc genhtml_legend=1 00:06:55.531 --rc geninfo_all_blocks=1 00:06:55.531 --rc geninfo_unexecuted_blocks=1 00:06:55.531 00:06:55.531 ' 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.531 --rc genhtml_branch_coverage=1 00:06:55.531 --rc genhtml_function_coverage=1 00:06:55.531 --rc genhtml_legend=1 00:06:55.531 --rc geninfo_all_blocks=1 00:06:55.531 --rc geninfo_unexecuted_blocks=1 00:06:55.531 00:06:55.531 ' 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.531 --rc genhtml_branch_coverage=1 00:06:55.531 --rc genhtml_function_coverage=1 00:06:55.531 --rc genhtml_legend=1 00:06:55.531 --rc geninfo_all_blocks=1 00:06:55.531 --rc geninfo_unexecuted_blocks=1 00:06:55.531 00:06:55.531 ' 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.531 09:12:47 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:55.532 ************************************ 00:06:55.532 START TEST dd_uring_copy 00:06:55.532 ************************************ 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:55.532 
09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:55.532 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.791 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=zgvdwifqaxorbjk47etbp4l6lgwu5efkgsydlrhlmbqf4dw3mpkstg7l84waclt4b01jwsmpnudi3nk2j8up7s9fpol141q408gw44aq8a12ifu5i4jr60sddms3502ed9j0zcrui5dnblzgau5ozrycujbxdxon7air9qqgg9hpl0edbmif9axva1p9jzmu8ri8p7m8xz5xa4rmzafzi7rg59m7ofbtl1y1ofla69sztpt5ugivsml6jil4k7rhnly3cm38p92p3o6nkx02588evon6tjwven0u5j2756xyuw6fp12f6qm9z2qcfmgg2frkeafz9nkimsq5qwucb2va07ycqn2ye0wkiqwy6m4c9v8vaajzqgweou1n9yvqyagvhkzwutv1e18dbvdm6ord1pzyl84widjnw1nws4fz6n195utrb5yz5fqickc5pv1tcz0lctzgmfryyy9d947zef6dxuazakrlo5iowfoebnghamr00lim8rc7whbab93j2m09gun2wstbxqoo7xpcxaij3npzecggtgytpeqalccpl1al64ipwcoobo26ukmttjk34vbycn4k81a0o19k7nykdvd2dr6dtkbp2iwelv6njcxhk142b7mm7po7qau67f3gqgrt643n1hjq7mseysmhe7ye9jprds707bcbuqo7kqypbcrbz00hczlz0bwk2knuxk761p8b0hqi9t4mpxvydxli4rqcz31q4ckozzsg1rtcm8ajt6uqcrki7bh1ypo3m34m5xeqb3g605ps4uihnbchxkrp4szm0va49wpf5z19mzv72sj6xvypj2darz7nfxd07ds7bkhdzmil0lwzula2t97ob1m03qj5ffnv95pyrkg4fi8drsszowata5fni5p40jzwwy2o4uno5x0olwpw9iplaj0dh853e88nv92nbqh2n6rfrkmv4tao3gdnkl1bfz2nsnjio9cxrbbr8be8a1tnuav6137c9eox183si1gy6drivl8k 00:06:55.791 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
zgvdwifqaxorbjk47etbp4l6lgwu5efkgsydlrhlmbqf4dw3mpkstg7l84waclt4b01jwsmpnudi3nk2j8up7s9fpol141q408gw44aq8a12ifu5i4jr60sddms3502ed9j0zcrui5dnblzgau5ozrycujbxdxon7air9qqgg9hpl0edbmif9axva1p9jzmu8ri8p7m8xz5xa4rmzafzi7rg59m7ofbtl1y1ofla69sztpt5ugivsml6jil4k7rhnly3cm38p92p3o6nkx02588evon6tjwven0u5j2756xyuw6fp12f6qm9z2qcfmgg2frkeafz9nkimsq5qwucb2va07ycqn2ye0wkiqwy6m4c9v8vaajzqgweou1n9yvqyagvhkzwutv1e18dbvdm6ord1pzyl84widjnw1nws4fz6n195utrb5yz5fqickc5pv1tcz0lctzgmfryyy9d947zef6dxuazakrlo5iowfoebnghamr00lim8rc7whbab93j2m09gun2wstbxqoo7xpcxaij3npzecggtgytpeqalccpl1al64ipwcoobo26ukmttjk34vbycn4k81a0o19k7nykdvd2dr6dtkbp2iwelv6njcxhk142b7mm7po7qau67f3gqgrt643n1hjq7mseysmhe7ye9jprds707bcbuqo7kqypbcrbz00hczlz0bwk2knuxk761p8b0hqi9t4mpxvydxli4rqcz31q4ckozzsg1rtcm8ajt6uqcrki7bh1ypo3m34m5xeqb3g605ps4uihnbchxkrp4szm0va49wpf5z19mzv72sj6xvypj2darz7nfxd07ds7bkhdzmil0lwzula2t97ob1m03qj5ffnv95pyrkg4fi8drsszowata5fni5p40jzwwy2o4uno5x0olwpw9iplaj0dh853e88nv92nbqh2n6rfrkmv4tao3gdnkl1bfz2nsnjio9cxrbbr8be8a1tnuav6137c9eox183si1gy6drivl8k 00:06:55.791 09:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:55.791 [2024-10-08 09:12:47.274269] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:55.791 [2024-10-08 09:12:47.274616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61469 ] 00:06:55.791 [2024-10-08 09:12:47.416280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.050 [2024-10-08 09:12:47.584509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.050 [2024-10-08 09:12:47.663061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.986  [2024-10-08T09:12:49.237Z] Copying: 511/511 [MB] (average 1005 MBps) 00:06:57.554 00:06:57.554 09:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:57.554 09:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:57.554 09:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:57.554 09:12:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.554 [2024-10-08 09:12:49.139649] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
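Every spdk_dd invocation in this copy test takes its bdev layout as JSON on a spare file descriptor (--json /dev/fd/62); gen_conf produces the { "subsystems": ... } document that the log prints after each command line, declaring the malloc0 and uring0 bdevs before any data moves. A hand-written equivalent, assuming the same bdev names and the /dev/zram1 device created earlier in the trace (dd_conf.json is a hypothetical file name, since the test feeds the config through process substitution instead), might look like:

  cat > dd_conf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create",
            "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
          { "method": "bdev_uring_create",
            "params": { "name": "uring0", "filename": "/dev/zram1" } },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json dd_conf.json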
00:06:57.554 [2024-10-08 09:12:49.139792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61490 ] 00:06:57.554 { 00:06:57.554 "subsystems": [ 00:06:57.554 { 00:06:57.554 "subsystem": "bdev", 00:06:57.554 "config": [ 00:06:57.554 { 00:06:57.554 "params": { 00:06:57.554 "block_size": 512, 00:06:57.554 "num_blocks": 1048576, 00:06:57.554 "name": "malloc0" 00:06:57.554 }, 00:06:57.554 "method": "bdev_malloc_create" 00:06:57.554 }, 00:06:57.554 { 00:06:57.554 "params": { 00:06:57.554 "filename": "/dev/zram1", 00:06:57.554 "name": "uring0" 00:06:57.554 }, 00:06:57.554 "method": "bdev_uring_create" 00:06:57.554 }, 00:06:57.554 { 00:06:57.554 "method": "bdev_wait_for_examine" 00:06:57.554 } 00:06:57.554 ] 00:06:57.554 } 00:06:57.554 ] 00:06:57.554 } 00:06:57.813 [2024-10-08 09:12:49.276826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.813 [2024-10-08 09:12:49.420281] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.072 [2024-10-08 09:12:49.503633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.445  [2024-10-08T09:12:52.089Z] Copying: 192/512 [MB] (192 MBps) [2024-10-08T09:12:52.347Z] Copying: 404/512 [MB] (212 MBps) [2024-10-08T09:12:52.915Z] Copying: 512/512 [MB] (average 206 MBps) 00:07:01.232 00:07:01.232 09:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:01.232 09:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:01.232 09:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.232 09:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.232 [2024-10-08 09:12:52.732903] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:01.232 [2024-10-08 09:12:52.733008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61540 ] 00:07:01.232 { 00:07:01.232 "subsystems": [ 00:07:01.232 { 00:07:01.232 "subsystem": "bdev", 00:07:01.232 "config": [ 00:07:01.232 { 00:07:01.232 "params": { 00:07:01.232 "block_size": 512, 00:07:01.232 "num_blocks": 1048576, 00:07:01.232 "name": "malloc0" 00:07:01.232 }, 00:07:01.232 "method": "bdev_malloc_create" 00:07:01.232 }, 00:07:01.232 { 00:07:01.232 "params": { 00:07:01.232 "filename": "/dev/zram1", 00:07:01.232 "name": "uring0" 00:07:01.232 }, 00:07:01.232 "method": "bdev_uring_create" 00:07:01.232 }, 00:07:01.232 { 00:07:01.232 "method": "bdev_wait_for_examine" 00:07:01.232 } 00:07:01.232 ] 00:07:01.232 } 00:07:01.232 ] 00:07:01.232 } 00:07:01.232 [2024-10-08 09:12:52.873916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.492 [2024-10-08 09:12:52.949727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.492 [2024-10-08 09:12:53.005898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.868  [2024-10-08T09:12:55.488Z] Copying: 193/512 [MB] (193 MBps) [2024-10-08T09:12:56.423Z] Copying: 365/512 [MB] (172 MBps) [2024-10-08T09:12:56.683Z] Copying: 512/512 [MB] (average 174 MBps) 00:07:05.000 00:07:05.000 09:12:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:05.000 09:12:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ zgvdwifqaxorbjk47etbp4l6lgwu5efkgsydlrhlmbqf4dw3mpkstg7l84waclt4b01jwsmpnudi3nk2j8up7s9fpol141q408gw44aq8a12ifu5i4jr60sddms3502ed9j0zcrui5dnblzgau5ozrycujbxdxon7air9qqgg9hpl0edbmif9axva1p9jzmu8ri8p7m8xz5xa4rmzafzi7rg59m7ofbtl1y1ofla69sztpt5ugivsml6jil4k7rhnly3cm38p92p3o6nkx02588evon6tjwven0u5j2756xyuw6fp12f6qm9z2qcfmgg2frkeafz9nkimsq5qwucb2va07ycqn2ye0wkiqwy6m4c9v8vaajzqgweou1n9yvqyagvhkzwutv1e18dbvdm6ord1pzyl84widjnw1nws4fz6n195utrb5yz5fqickc5pv1tcz0lctzgmfryyy9d947zef6dxuazakrlo5iowfoebnghamr00lim8rc7whbab93j2m09gun2wstbxqoo7xpcxaij3npzecggtgytpeqalccpl1al64ipwcoobo26ukmttjk34vbycn4k81a0o19k7nykdvd2dr6dtkbp2iwelv6njcxhk142b7mm7po7qau67f3gqgrt643n1hjq7mseysmhe7ye9jprds707bcbuqo7kqypbcrbz00hczlz0bwk2knuxk761p8b0hqi9t4mpxvydxli4rqcz31q4ckozzsg1rtcm8ajt6uqcrki7bh1ypo3m34m5xeqb3g605ps4uihnbchxkrp4szm0va49wpf5z19mzv72sj6xvypj2darz7nfxd07ds7bkhdzmil0lwzula2t97ob1m03qj5ffnv95pyrkg4fi8drsszowata5fni5p40jzwwy2o4uno5x0olwpw9iplaj0dh853e88nv92nbqh2n6rfrkmv4tao3gdnkl1bfz2nsnjio9cxrbbr8be8a1tnuav6137c9eox183si1gy6drivl8k == 
\z\g\v\d\w\i\f\q\a\x\o\r\b\j\k\4\7\e\t\b\p\4\l\6\l\g\w\u\5\e\f\k\g\s\y\d\l\r\h\l\m\b\q\f\4\d\w\3\m\p\k\s\t\g\7\l\8\4\w\a\c\l\t\4\b\0\1\j\w\s\m\p\n\u\d\i\3\n\k\2\j\8\u\p\7\s\9\f\p\o\l\1\4\1\q\4\0\8\g\w\4\4\a\q\8\a\1\2\i\f\u\5\i\4\j\r\6\0\s\d\d\m\s\3\5\0\2\e\d\9\j\0\z\c\r\u\i\5\d\n\b\l\z\g\a\u\5\o\z\r\y\c\u\j\b\x\d\x\o\n\7\a\i\r\9\q\q\g\g\9\h\p\l\0\e\d\b\m\i\f\9\a\x\v\a\1\p\9\j\z\m\u\8\r\i\8\p\7\m\8\x\z\5\x\a\4\r\m\z\a\f\z\i\7\r\g\5\9\m\7\o\f\b\t\l\1\y\1\o\f\l\a\6\9\s\z\t\p\t\5\u\g\i\v\s\m\l\6\j\i\l\4\k\7\r\h\n\l\y\3\c\m\3\8\p\9\2\p\3\o\6\n\k\x\0\2\5\8\8\e\v\o\n\6\t\j\w\v\e\n\0\u\5\j\2\7\5\6\x\y\u\w\6\f\p\1\2\f\6\q\m\9\z\2\q\c\f\m\g\g\2\f\r\k\e\a\f\z\9\n\k\i\m\s\q\5\q\w\u\c\b\2\v\a\0\7\y\c\q\n\2\y\e\0\w\k\i\q\w\y\6\m\4\c\9\v\8\v\a\a\j\z\q\g\w\e\o\u\1\n\9\y\v\q\y\a\g\v\h\k\z\w\u\t\v\1\e\1\8\d\b\v\d\m\6\o\r\d\1\p\z\y\l\8\4\w\i\d\j\n\w\1\n\w\s\4\f\z\6\n\1\9\5\u\t\r\b\5\y\z\5\f\q\i\c\k\c\5\p\v\1\t\c\z\0\l\c\t\z\g\m\f\r\y\y\y\9\d\9\4\7\z\e\f\6\d\x\u\a\z\a\k\r\l\o\5\i\o\w\f\o\e\b\n\g\h\a\m\r\0\0\l\i\m\8\r\c\7\w\h\b\a\b\9\3\j\2\m\0\9\g\u\n\2\w\s\t\b\x\q\o\o\7\x\p\c\x\a\i\j\3\n\p\z\e\c\g\g\t\g\y\t\p\e\q\a\l\c\c\p\l\1\a\l\6\4\i\p\w\c\o\o\b\o\2\6\u\k\m\t\t\j\k\3\4\v\b\y\c\n\4\k\8\1\a\0\o\1\9\k\7\n\y\k\d\v\d\2\d\r\6\d\t\k\b\p\2\i\w\e\l\v\6\n\j\c\x\h\k\1\4\2\b\7\m\m\7\p\o\7\q\a\u\6\7\f\3\g\q\g\r\t\6\4\3\n\1\h\j\q\7\m\s\e\y\s\m\h\e\7\y\e\9\j\p\r\d\s\7\0\7\b\c\b\u\q\o\7\k\q\y\p\b\c\r\b\z\0\0\h\c\z\l\z\0\b\w\k\2\k\n\u\x\k\7\6\1\p\8\b\0\h\q\i\9\t\4\m\p\x\v\y\d\x\l\i\4\r\q\c\z\3\1\q\4\c\k\o\z\z\s\g\1\r\t\c\m\8\a\j\t\6\u\q\c\r\k\i\7\b\h\1\y\p\o\3\m\3\4\m\5\x\e\q\b\3\g\6\0\5\p\s\4\u\i\h\n\b\c\h\x\k\r\p\4\s\z\m\0\v\a\4\9\w\p\f\5\z\1\9\m\z\v\7\2\s\j\6\x\v\y\p\j\2\d\a\r\z\7\n\f\x\d\0\7\d\s\7\b\k\h\d\z\m\i\l\0\l\w\z\u\l\a\2\t\9\7\o\b\1\m\0\3\q\j\5\f\f\n\v\9\5\p\y\r\k\g\4\f\i\8\d\r\s\s\z\o\w\a\t\a\5\f\n\i\5\p\4\0\j\z\w\w\y\2\o\4\u\n\o\5\x\0\o\l\w\p\w\9\i\p\l\a\j\0\d\h\8\5\3\e\8\8\n\v\9\2\n\b\q\h\2\n\6\r\f\r\k\m\v\4\t\a\o\3\g\d\n\k\l\1\b\f\z\2\n\s\n\j\i\o\9\c\x\r\b\b\r\8\b\e\8\a\1\t\n\u\a\v\6\1\3\7\c\9\e\o\x\1\8\3\s\i\1\g\y\6\d\r\i\v\l\8\k ]] 00:07:05.000 09:12:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:05.000 09:12:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ zgvdwifqaxorbjk47etbp4l6lgwu5efkgsydlrhlmbqf4dw3mpkstg7l84waclt4b01jwsmpnudi3nk2j8up7s9fpol141q408gw44aq8a12ifu5i4jr60sddms3502ed9j0zcrui5dnblzgau5ozrycujbxdxon7air9qqgg9hpl0edbmif9axva1p9jzmu8ri8p7m8xz5xa4rmzafzi7rg59m7ofbtl1y1ofla69sztpt5ugivsml6jil4k7rhnly3cm38p92p3o6nkx02588evon6tjwven0u5j2756xyuw6fp12f6qm9z2qcfmgg2frkeafz9nkimsq5qwucb2va07ycqn2ye0wkiqwy6m4c9v8vaajzqgweou1n9yvqyagvhkzwutv1e18dbvdm6ord1pzyl84widjnw1nws4fz6n195utrb5yz5fqickc5pv1tcz0lctzgmfryyy9d947zef6dxuazakrlo5iowfoebnghamr00lim8rc7whbab93j2m09gun2wstbxqoo7xpcxaij3npzecggtgytpeqalccpl1al64ipwcoobo26ukmttjk34vbycn4k81a0o19k7nykdvd2dr6dtkbp2iwelv6njcxhk142b7mm7po7qau67f3gqgrt643n1hjq7mseysmhe7ye9jprds707bcbuqo7kqypbcrbz00hczlz0bwk2knuxk761p8b0hqi9t4mpxvydxli4rqcz31q4ckozzsg1rtcm8ajt6uqcrki7bh1ypo3m34m5xeqb3g605ps4uihnbchxkrp4szm0va49wpf5z19mzv72sj6xvypj2darz7nfxd07ds7bkhdzmil0lwzula2t97ob1m03qj5ffnv95pyrkg4fi8drsszowata5fni5p40jzwwy2o4uno5x0olwpw9iplaj0dh853e88nv92nbqh2n6rfrkmv4tao3gdnkl1bfz2nsnjio9cxrbbr8be8a1tnuav6137c9eox183si1gy6drivl8k == 
\z\g\v\d\w\i\f\q\a\x\o\r\b\j\k\4\7\e\t\b\p\4\l\6\l\g\w\u\5\e\f\k\g\s\y\d\l\r\h\l\m\b\q\f\4\d\w\3\m\p\k\s\t\g\7\l\8\4\w\a\c\l\t\4\b\0\1\j\w\s\m\p\n\u\d\i\3\n\k\2\j\8\u\p\7\s\9\f\p\o\l\1\4\1\q\4\0\8\g\w\4\4\a\q\8\a\1\2\i\f\u\5\i\4\j\r\6\0\s\d\d\m\s\3\5\0\2\e\d\9\j\0\z\c\r\u\i\5\d\n\b\l\z\g\a\u\5\o\z\r\y\c\u\j\b\x\d\x\o\n\7\a\i\r\9\q\q\g\g\9\h\p\l\0\e\d\b\m\i\f\9\a\x\v\a\1\p\9\j\z\m\u\8\r\i\8\p\7\m\8\x\z\5\x\a\4\r\m\z\a\f\z\i\7\r\g\5\9\m\7\o\f\b\t\l\1\y\1\o\f\l\a\6\9\s\z\t\p\t\5\u\g\i\v\s\m\l\6\j\i\l\4\k\7\r\h\n\l\y\3\c\m\3\8\p\9\2\p\3\o\6\n\k\x\0\2\5\8\8\e\v\o\n\6\t\j\w\v\e\n\0\u\5\j\2\7\5\6\x\y\u\w\6\f\p\1\2\f\6\q\m\9\z\2\q\c\f\m\g\g\2\f\r\k\e\a\f\z\9\n\k\i\m\s\q\5\q\w\u\c\b\2\v\a\0\7\y\c\q\n\2\y\e\0\w\k\i\q\w\y\6\m\4\c\9\v\8\v\a\a\j\z\q\g\w\e\o\u\1\n\9\y\v\q\y\a\g\v\h\k\z\w\u\t\v\1\e\1\8\d\b\v\d\m\6\o\r\d\1\p\z\y\l\8\4\w\i\d\j\n\w\1\n\w\s\4\f\z\6\n\1\9\5\u\t\r\b\5\y\z\5\f\q\i\c\k\c\5\p\v\1\t\c\z\0\l\c\t\z\g\m\f\r\y\y\y\9\d\9\4\7\z\e\f\6\d\x\u\a\z\a\k\r\l\o\5\i\o\w\f\o\e\b\n\g\h\a\m\r\0\0\l\i\m\8\r\c\7\w\h\b\a\b\9\3\j\2\m\0\9\g\u\n\2\w\s\t\b\x\q\o\o\7\x\p\c\x\a\i\j\3\n\p\z\e\c\g\g\t\g\y\t\p\e\q\a\l\c\c\p\l\1\a\l\6\4\i\p\w\c\o\o\b\o\2\6\u\k\m\t\t\j\k\3\4\v\b\y\c\n\4\k\8\1\a\0\o\1\9\k\7\n\y\k\d\v\d\2\d\r\6\d\t\k\b\p\2\i\w\e\l\v\6\n\j\c\x\h\k\1\4\2\b\7\m\m\7\p\o\7\q\a\u\6\7\f\3\g\q\g\r\t\6\4\3\n\1\h\j\q\7\m\s\e\y\s\m\h\e\7\y\e\9\j\p\r\d\s\7\0\7\b\c\b\u\q\o\7\k\q\y\p\b\c\r\b\z\0\0\h\c\z\l\z\0\b\w\k\2\k\n\u\x\k\7\6\1\p\8\b\0\h\q\i\9\t\4\m\p\x\v\y\d\x\l\i\4\r\q\c\z\3\1\q\4\c\k\o\z\z\s\g\1\r\t\c\m\8\a\j\t\6\u\q\c\r\k\i\7\b\h\1\y\p\o\3\m\3\4\m\5\x\e\q\b\3\g\6\0\5\p\s\4\u\i\h\n\b\c\h\x\k\r\p\4\s\z\m\0\v\a\4\9\w\p\f\5\z\1\9\m\z\v\7\2\s\j\6\x\v\y\p\j\2\d\a\r\z\7\n\f\x\d\0\7\d\s\7\b\k\h\d\z\m\i\l\0\l\w\z\u\l\a\2\t\9\7\o\b\1\m\0\3\q\j\5\f\f\n\v\9\5\p\y\r\k\g\4\f\i\8\d\r\s\s\z\o\w\a\t\a\5\f\n\i\5\p\4\0\j\z\w\w\y\2\o\4\u\n\o\5\x\0\o\l\w\p\w\9\i\p\l\a\j\0\d\h\8\5\3\e\8\8\n\v\9\2\n\b\q\h\2\n\6\r\f\r\k\m\v\4\t\a\o\3\g\d\n\k\l\1\b\f\z\2\n\s\n\j\i\o\9\c\x\r\b\b\r\8\b\e\8\a\1\t\n\u\a\v\6\1\3\7\c\9\e\o\x\1\8\3\s\i\1\g\y\6\d\r\i\v\l\8\k ]] 00:07:05.000 09:12:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:05.593 09:12:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:05.593 09:12:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:05.593 09:12:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:05.593 09:12:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.593 { 00:07:05.593 "subsystems": [ 00:07:05.593 { 00:07:05.593 "subsystem": "bdev", 00:07:05.593 "config": [ 00:07:05.593 { 00:07:05.593 "params": { 00:07:05.593 "block_size": 512, 00:07:05.593 "num_blocks": 1048576, 00:07:05.593 "name": "malloc0" 00:07:05.593 }, 00:07:05.593 "method": "bdev_malloc_create" 00:07:05.593 }, 00:07:05.593 { 00:07:05.593 "params": { 00:07:05.593 "filename": "/dev/zram1", 00:07:05.593 "name": "uring0" 00:07:05.593 }, 00:07:05.593 "method": "bdev_uring_create" 00:07:05.593 }, 00:07:05.593 { 00:07:05.593 "method": "bdev_wait_for_examine" 00:07:05.593 } 00:07:05.593 ] 00:07:05.593 } 00:07:05.593 ] 00:07:05.593 } 00:07:05.593 [2024-10-08 09:12:57.008337] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
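The dd/uring.sh@65-@71 lines above close the round trip: the 1024-character magic written into magic.dump0 has been pushed through uring0 and copied back out as magic.dump1, so the test re-reads the prefix of each dump, compares it against the generated string, and diffs the two files byte for byte. A condensed sketch of that verification, with the redirection spelled out here even though the captured trace omits it, is:

  # the magic prefix must survive the dump0 -> uring0 -> dump1 round trip
  read -rn1024 verify_magic < /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
  [[ $verify_magic == "$magic" ]]
  # and the two dumps must match in full
  diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 \
          /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1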
00:07:05.593 [2024-10-08 09:12:57.009018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61611 ] 00:07:05.593 [2024-10-08 09:12:57.147211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.593 [2024-10-08 09:12:57.266805] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.851 [2024-10-08 09:12:57.322949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.233  [2024-10-08T09:12:59.851Z] Copying: 167/512 [MB] (167 MBps) [2024-10-08T09:13:00.787Z] Copying: 324/512 [MB] (156 MBps) [2024-10-08T09:13:01.046Z] Copying: 462/512 [MB] (137 MBps) [2024-10-08T09:13:01.304Z] Copying: 512/512 [MB] (average 152 MBps) 00:07:09.621 00:07:09.621 09:13:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:09.621 09:13:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:09.621 09:13:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:09.621 09:13:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:09.621 09:13:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:09.879 09:13:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:09.879 09:13:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:09.879 09:13:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.879 [2024-10-08 09:13:01.360522] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:09.879 [2024-10-08 09:13:01.360654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61672 ] 00:07:09.879 { 00:07:09.879 "subsystems": [ 00:07:09.879 { 00:07:09.879 "subsystem": "bdev", 00:07:09.879 "config": [ 00:07:09.879 { 00:07:09.879 "params": { 00:07:09.879 "block_size": 512, 00:07:09.879 "num_blocks": 1048576, 00:07:09.879 "name": "malloc0" 00:07:09.879 }, 00:07:09.879 "method": "bdev_malloc_create" 00:07:09.879 }, 00:07:09.879 { 00:07:09.879 "params": { 00:07:09.879 "filename": "/dev/zram1", 00:07:09.879 "name": "uring0" 00:07:09.879 }, 00:07:09.879 "method": "bdev_uring_create" 00:07:09.879 }, 00:07:09.879 { 00:07:09.879 "params": { 00:07:09.879 "name": "uring0" 00:07:09.879 }, 00:07:09.879 "method": "bdev_uring_delete" 00:07:09.879 }, 00:07:09.879 { 00:07:09.879 "method": "bdev_wait_for_examine" 00:07:09.879 } 00:07:09.879 ] 00:07:09.879 } 00:07:09.879 ] 00:07:09.879 } 00:07:09.879 [2024-10-08 09:13:01.501668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.138 [2024-10-08 09:13:01.612969] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.138 [2024-10-08 09:13:01.672597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.397  [2024-10-08T09:13:02.339Z] Copying: 0/0 [B] (average 0 Bps) 00:07:10.656 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.656 09:13:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:10.656 09:13:02 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:10.915 [2024-10-08 09:13:02.376953] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:07:10.915 [2024-10-08 09:13:02.378050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61701 ] 00:07:10.915 { 00:07:10.916 "subsystems": [ 00:07:10.916 { 00:07:10.916 "subsystem": "bdev", 00:07:10.916 "config": [ 00:07:10.916 { 00:07:10.916 "params": { 00:07:10.916 "block_size": 512, 00:07:10.916 "num_blocks": 1048576, 00:07:10.916 "name": "malloc0" 00:07:10.916 }, 00:07:10.916 "method": "bdev_malloc_create" 00:07:10.916 }, 00:07:10.916 { 00:07:10.916 "params": { 00:07:10.916 "filename": "/dev/zram1", 00:07:10.916 "name": "uring0" 00:07:10.916 }, 00:07:10.916 "method": "bdev_uring_create" 00:07:10.916 }, 00:07:10.916 { 00:07:10.916 "params": { 00:07:10.916 "name": "uring0" 00:07:10.916 }, 00:07:10.916 "method": "bdev_uring_delete" 00:07:10.916 }, 00:07:10.916 { 00:07:10.916 "method": "bdev_wait_for_examine" 00:07:10.916 } 00:07:10.916 ] 00:07:10.916 } 00:07:10.916 ] 00:07:10.916 } 00:07:10.916 [2024-10-08 09:13:02.519201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.174 [2024-10-08 09:13:02.608469] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.175 [2024-10-08 09:13:02.662876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.434 [2024-10-08 09:13:02.872849] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:11.434 [2024-10-08 09:13:02.872902] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:11.434 [2024-10-08 09:13:02.872914] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:11.434 [2024-10-08 09:13:02.872924] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.692 [2024-10-08 09:13:03.218054] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:11.693 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:11.951 ************************************ 00:07:11.951 END TEST dd_uring_copy 00:07:11.951 ************************************ 00:07:11.951 00:07:11.951 real 0m16.396s 00:07:11.951 user 0m11.152s 00:07:11.951 sys 0m13.289s 00:07:11.951 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.951 09:13:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:11.951 ************************************ 00:07:11.951 END TEST spdk_dd_uring 00:07:11.951 ************************************ 00:07:11.951 00:07:11.951 real 0m16.713s 00:07:11.951 user 0m11.340s 00:07:11.951 sys 0m13.418s 00:07:11.951 09:13:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.951 09:13:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:12.211 09:13:03 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:12.211 09:13:03 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.211 09:13:03 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.211 09:13:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:12.211 ************************************ 00:07:12.211 START TEST spdk_dd_sparse 00:07:12.211 ************************************ 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:12.211 * Looking for test storage... 00:07:12.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.211 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:12.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.212 --rc genhtml_branch_coverage=1 00:07:12.212 --rc genhtml_function_coverage=1 00:07:12.212 --rc genhtml_legend=1 00:07:12.212 --rc geninfo_all_blocks=1 00:07:12.212 --rc geninfo_unexecuted_blocks=1 00:07:12.212 00:07:12.212 ' 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:12.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.212 --rc genhtml_branch_coverage=1 00:07:12.212 --rc genhtml_function_coverage=1 00:07:12.212 --rc genhtml_legend=1 00:07:12.212 --rc geninfo_all_blocks=1 00:07:12.212 --rc geninfo_unexecuted_blocks=1 00:07:12.212 00:07:12.212 ' 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:12.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.212 --rc genhtml_branch_coverage=1 00:07:12.212 --rc genhtml_function_coverage=1 00:07:12.212 --rc genhtml_legend=1 00:07:12.212 --rc geninfo_all_blocks=1 00:07:12.212 --rc geninfo_unexecuted_blocks=1 00:07:12.212 00:07:12.212 ' 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:12.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.212 --rc genhtml_branch_coverage=1 00:07:12.212 --rc genhtml_function_coverage=1 00:07:12.212 --rc genhtml_legend=1 00:07:12.212 --rc geninfo_all_blocks=1 00:07:12.212 --rc geninfo_unexecuted_blocks=1 00:07:12.212 00:07:12.212 ' 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.212 09:13:03 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:12.212 1+0 records in 00:07:12.212 1+0 records out 00:07:12.212 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00662477 s, 633 MB/s 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:12.212 1+0 records in 00:07:12.212 1+0 records out 00:07:12.212 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00779751 s, 538 MB/s 00:07:12.212 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:12.471 1+0 records in 00:07:12.471 1+0 records out 00:07:12.471 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00827115 s, 507 MB/s 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:12.471 ************************************ 00:07:12.471 START TEST dd_sparse_file_to_file 00:07:12.471 ************************************ 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:12.471 09:13:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:12.471 [2024-10-08 09:13:03.970919] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
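The dd_sparse prepare step above builds file_zero1 as a sparse file: three separate 4 MiB zero writes at offsets 0, 16 MiB and 32 MiB (bs=4M with seek=0/4/8), leaving holes in between, which the file_to_file test then copies with --sparse so the holes carry over to file_zero2 (the JSON config also sets up the dd_aio bdev and dd_lvstore lvstore used by the bdev variants of the test). The stat checks later in the log (%s apparent size, %b allocated 512-byte blocks) report 37748736 bytes but only 24576 blocks for both files, i.e. 36 MiB apparent versus 12 MiB allocated. The same layout can be reproduced on any filesystem with hole support using the commands from the trace:

  dd if=/dev/zero of=file_zero1 bs=4M count=1           # 4 MiB of data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # 4 MiB at the 16 MiB mark
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # 4 MiB at the 32 MiB mark
  stat --printf='%s bytes, %b blocks\n' file_zero1      # large apparent size, few blocks => sparse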
00:07:12.471 [2024-10-08 09:13:03.971178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61801 ] 00:07:12.471 { 00:07:12.471 "subsystems": [ 00:07:12.471 { 00:07:12.471 "subsystem": "bdev", 00:07:12.471 "config": [ 00:07:12.471 { 00:07:12.471 "params": { 00:07:12.471 "block_size": 4096, 00:07:12.471 "filename": "dd_sparse_aio_disk", 00:07:12.471 "name": "dd_aio" 00:07:12.471 }, 00:07:12.471 "method": "bdev_aio_create" 00:07:12.471 }, 00:07:12.471 { 00:07:12.471 "params": { 00:07:12.471 "lvs_name": "dd_lvstore", 00:07:12.471 "bdev_name": "dd_aio" 00:07:12.471 }, 00:07:12.471 "method": "bdev_lvol_create_lvstore" 00:07:12.471 }, 00:07:12.471 { 00:07:12.471 "method": "bdev_wait_for_examine" 00:07:12.471 } 00:07:12.471 ] 00:07:12.471 } 00:07:12.471 ] 00:07:12.471 } 00:07:12.471 [2024-10-08 09:13:04.111047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.730 [2024-10-08 09:13:04.243168] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.730 [2024-10-08 09:13:04.300671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.730  [2024-10-08T09:13:04.978Z] Copying: 12/36 [MB] (average 750 MBps) 00:07:13.295 00:07:13.295 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:13.295 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:13.295 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:13.295 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:13.295 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:13.295 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:13.295 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:13.295 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:13.295 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:13.296 00:07:13.296 real 0m0.807s 00:07:13.296 user 0m0.533s 00:07:13.296 sys 0m0.385s 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:13.296 ************************************ 00:07:13.296 END TEST dd_sparse_file_to_file 00:07:13.296 ************************************ 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:13.296 ************************************ 00:07:13.296 START TEST dd_sparse_file_to_bdev 
00:07:13.296 ************************************ 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:13.296 09:13:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:13.296 [2024-10-08 09:13:04.821553] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:07:13.296 [2024-10-08 09:13:04.821942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61849 ] 00:07:13.296 { 00:07:13.296 "subsystems": [ 00:07:13.296 { 00:07:13.296 "subsystem": "bdev", 00:07:13.296 "config": [ 00:07:13.296 { 00:07:13.296 "params": { 00:07:13.296 "block_size": 4096, 00:07:13.296 "filename": "dd_sparse_aio_disk", 00:07:13.296 "name": "dd_aio" 00:07:13.296 }, 00:07:13.296 "method": "bdev_aio_create" 00:07:13.296 }, 00:07:13.296 { 00:07:13.296 "params": { 00:07:13.296 "lvs_name": "dd_lvstore", 00:07:13.296 "lvol_name": "dd_lvol", 00:07:13.296 "size_in_mib": 36, 00:07:13.296 "thin_provision": true 00:07:13.296 }, 00:07:13.296 "method": "bdev_lvol_create" 00:07:13.296 }, 00:07:13.296 { 00:07:13.296 "method": "bdev_wait_for_examine" 00:07:13.296 } 00:07:13.296 ] 00:07:13.296 } 00:07:13.296 ] 00:07:13.296 } 00:07:13.296 [2024-10-08 09:13:04.962350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.554 [2024-10-08 09:13:05.107998] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.554 [2024-10-08 09:13:05.163757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.813  [2024-10-08T09:13:05.754Z] Copying: 12/36 [MB] (average 571 MBps) 00:07:14.071 00:07:14.071 ************************************ 00:07:14.071 END TEST dd_sparse_file_to_bdev 00:07:14.071 ************************************ 00:07:14.071 00:07:14.071 real 0m0.748s 00:07:14.071 user 0m0.513s 00:07:14.071 sys 0m0.355s 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:14.071 ************************************ 00:07:14.071 START TEST dd_sparse_bdev_to_file 00:07:14.071 ************************************ 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:14.071 09:13:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:14.071 [2024-10-08 09:13:05.619483] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:07:14.071 [2024-10-08 09:13:05.619609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61887 ] 00:07:14.071 { 00:07:14.071 "subsystems": [ 00:07:14.071 { 00:07:14.071 "subsystem": "bdev", 00:07:14.071 "config": [ 00:07:14.071 { 00:07:14.071 "params": { 00:07:14.071 "block_size": 4096, 00:07:14.071 "filename": "dd_sparse_aio_disk", 00:07:14.071 "name": "dd_aio" 00:07:14.071 }, 00:07:14.071 "method": "bdev_aio_create" 00:07:14.071 }, 00:07:14.071 { 00:07:14.071 "method": "bdev_wait_for_examine" 00:07:14.071 } 00:07:14.071 ] 00:07:14.071 } 00:07:14.071 ] 00:07:14.071 } 00:07:14.330 [2024-10-08 09:13:05.756794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.330 [2024-10-08 09:13:05.905447] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.330 [2024-10-08 09:13:05.962185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.588  [2024-10-08T09:13:06.530Z] Copying: 12/36 [MB] (average 1200 MBps) 00:07:14.847 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:14.847 00:07:14.847 real 0m0.798s 00:07:14.847 user 0m0.539s 00:07:14.847 sys 0m0.370s 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.847 ************************************ 00:07:14.847 END TEST dd_sparse_bdev_to_file 00:07:14.847 ************************************ 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:14.847 09:13:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:14.847 ************************************ 00:07:14.847 END TEST spdk_dd_sparse 00:07:14.847 ************************************ 00:07:14.847 00:07:14.847 real 0m2.752s 00:07:14.847 user 0m1.761s 00:07:14.847 sys 0m1.325s 00:07:14.848 09:13:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.848 09:13:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:14.848 09:13:06 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:14.848 09:13:06 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.848 09:13:06 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.848 09:13:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:14.848 ************************************ 00:07:14.848 START TEST spdk_dd_negative 00:07:14.848 ************************************ 00:07:14.848 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:15.107 * Looking for test storage... 
00:07:15.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:15.107 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:15.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.108 --rc genhtml_branch_coverage=1 00:07:15.108 --rc genhtml_function_coverage=1 00:07:15.108 --rc genhtml_legend=1 00:07:15.108 --rc geninfo_all_blocks=1 00:07:15.108 --rc geninfo_unexecuted_blocks=1 00:07:15.108 00:07:15.108 ' 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:15.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.108 --rc genhtml_branch_coverage=1 00:07:15.108 --rc genhtml_function_coverage=1 00:07:15.108 --rc genhtml_legend=1 00:07:15.108 --rc geninfo_all_blocks=1 00:07:15.108 --rc geninfo_unexecuted_blocks=1 00:07:15.108 00:07:15.108 ' 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:15.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.108 --rc genhtml_branch_coverage=1 00:07:15.108 --rc genhtml_function_coverage=1 00:07:15.108 --rc genhtml_legend=1 00:07:15.108 --rc geninfo_all_blocks=1 00:07:15.108 --rc geninfo_unexecuted_blocks=1 00:07:15.108 00:07:15.108 ' 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:15.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.108 --rc genhtml_branch_coverage=1 00:07:15.108 --rc genhtml_function_coverage=1 00:07:15.108 --rc genhtml_legend=1 00:07:15.108 --rc geninfo_all_blocks=1 00:07:15.108 --rc geninfo_unexecuted_blocks=1 00:07:15.108 00:07:15.108 ' 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:15.108 ************************************ 00:07:15.108 START TEST 
dd_invalid_arguments 00:07:15.108 ************************************ 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.108 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:15.108 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:15.108 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:15.108 00:07:15.108 CPU options: 00:07:15.108 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:15.108 (like [0,1,10]) 00:07:15.108 --lcores lcore to CPU mapping list. The list is in the format: 00:07:15.108 [<,lcores[@CPUs]>...] 00:07:15.108 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:15.108 Within the group, '-' is used for range separator, 00:07:15.108 ',' is used for single number separator. 00:07:15.108 '( )' can be omitted for single element group, 00:07:15.108 '@' can be omitted if cpus and lcores have the same value 00:07:15.108 --disable-cpumask-locks Disable CPU core lock files. 00:07:15.108 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:15.108 pollers in the app support interrupt mode) 00:07:15.108 -p, --main-core main (primary) core for DPDK 00:07:15.108 00:07:15.108 Configuration options: 00:07:15.108 -c, --config, --json JSON config file 00:07:15.108 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:15.108 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:15.108 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:15.108 --rpcs-allowed comma-separated list of permitted RPCS 00:07:15.108 --json-ignore-init-errors don't exit on invalid config entry 00:07:15.108 00:07:15.108 Memory options: 00:07:15.108 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:15.108 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:15.108 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:15.108 -R, --huge-unlink unlink huge files after initialization 00:07:15.108 -n, --mem-channels number of memory channels used for DPDK 00:07:15.108 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:15.108 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:15.108 --no-huge run without using hugepages 00:07:15.108 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:15.108 -i, --shm-id shared memory ID (optional) 00:07:15.108 -g, --single-file-segments force creating just one hugetlbfs file 00:07:15.108 00:07:15.108 PCI options: 00:07:15.108 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:15.108 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:15.108 -u, --no-pci disable PCI access 00:07:15.108 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:15.108 00:07:15.108 Log options: 00:07:15.108 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:15.108 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:15.108 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:15.108 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:15.109 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:15.109 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:15.109 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:15.109 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:15.109 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:15.109 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:15.109 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:15.109 --silence-noticelog disable notice level logging to stderr 00:07:15.109 00:07:15.109 Trace options: 00:07:15.109 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:15.109 setting 0 to disable trace (default 32768) 00:07:15.109 Tracepoints vary in size and can use more than one trace entry. 00:07:15.109 -e, --tpoint-group [:] 00:07:15.109 [2024-10-08 09:13:06.754106] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:15.109 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:15.109 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:15.109 bdev_raid, scheduler, all). 00:07:15.109 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:15.109 a tracepoint group. First tpoint inside a group can be enabled by 00:07:15.109 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:15.109 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:15.109 in /include/spdk_internal/trace_defs.h 00:07:15.109 00:07:15.109 Other options: 00:07:15.109 -h, --help show this usage 00:07:15.109 -v, --version print SPDK version 00:07:15.109 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:15.109 --env-context Opaque context for use of the env implementation 00:07:15.109 00:07:15.109 Application specific: 00:07:15.109 [--------- DD Options ---------] 00:07:15.109 --if Input file. Must specify either --if or --ib. 00:07:15.109 --ib Input bdev. Must specifier either --if or --ib 00:07:15.109 --of Output file. Must specify either --of or --ob. 00:07:15.109 --ob Output bdev. Must specify either --of or --ob. 00:07:15.109 --iflag Input file flags. 00:07:15.109 --oflag Output file flags. 00:07:15.109 --bs I/O unit size (default: 4096) 00:07:15.109 --qd Queue depth (default: 2) 00:07:15.109 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:15.109 --skip Skip this many I/O units at start of input. (default: 0) 00:07:15.109 --seek Skip this many I/O units at start of output. (default: 0) 00:07:15.109 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:15.109 --sparse Enable hole skipping in input target 00:07:15.109 Available iflag and oflag values: 00:07:15.109 append - append mode 00:07:15.109 direct - use direct I/O for data 00:07:15.109 directory - fail unless a directory 00:07:15.109 dsync - use synchronized I/O for data 00:07:15.109 noatime - do not update access time 00:07:15.109 noctty - do not assign controlling terminal from file 00:07:15.109 nofollow - do not follow symlinks 00:07:15.109 nonblock - use non-blocking I/O 00:07:15.109 sync - use synchronized I/O for data and metadata 00:07:15.109 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:15.109 ************************************ 00:07:15.109 END TEST dd_invalid_arguments 00:07:15.109 ************************************ 00:07:15.109 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.109 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.109 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.109 00:07:15.109 real 0m0.088s 00:07:15.109 user 0m0.048s 00:07:15.109 sys 0m0.038s 00:07:15.109 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.109 09:13:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:15.368 ************************************ 00:07:15.368 START TEST dd_double_input 00:07:15.368 ************************************ 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:15.368 [2024-10-08 09:13:06.890177] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
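The dd_double_input case passes spdk_dd both --if and --ib and succeeds only if the binary refuses to run, which is why the error above is the expected output; the es=22 captured on the following lines is its non-zero exit status. A hedged standalone equivalent of that check, reusing the paths from this run (they are specific to this CI environment):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
# Both an input file and an input bdev are given; spdk_dd must reject the combination.
if "$SPDK_DD" --if="$DUMP0" --ib= --ob=; then
    echo "ERROR: spdk_dd accepted --if together with --ib" >&2
    exit 1
else
    rc=$?
    echo "spdk_dd exited non-zero as expected (rc=$rc)"
fi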
00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.368 00:07:15.368 real 0m0.082s 00:07:15.368 user 0m0.057s 00:07:15.368 sys 0m0.023s 00:07:15.368 ************************************ 00:07:15.368 END TEST dd_double_input 00:07:15.368 ************************************ 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:15.368 ************************************ 00:07:15.368 START TEST dd_double_output 00:07:15.368 ************************************ 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.368 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.369 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.369 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.369 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.369 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.369 09:13:06 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:15.369 [2024-10-08 09:13:07.025993] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:15.369 09:13:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:15.369 09:13:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.369 09:13:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.369 09:13:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.369 00:07:15.369 real 0m0.082s 00:07:15.369 user 0m0.050s 00:07:15.369 sys 0m0.030s 00:07:15.369 09:13:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.369 ************************************ 00:07:15.369 END TEST dd_double_output 00:07:15.369 ************************************ 00:07:15.369 09:13:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:15.627 09:13:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:15.627 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.627 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.627 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:15.627 ************************************ 00:07:15.627 START TEST dd_no_input 00:07:15.627 ************************************ 00:07:15.627 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:15.627 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:15.627 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:15.628 [2024-10-08 09:13:07.149275] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.628 00:07:15.628 real 0m0.079s 00:07:15.628 user 0m0.055s 00:07:15.628 sys 0m0.022s 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.628 ************************************ 00:07:15.628 END TEST dd_no_input 00:07:15.628 ************************************ 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:15.628 ************************************ 00:07:15.628 START TEST dd_no_output 00:07:15.628 ************************************ 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.628 [2024-10-08 09:13:07.282815] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:15.628 09:13:07 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.628 00:07:15.628 real 0m0.078s 00:07:15.628 user 0m0.043s 00:07:15.628 sys 0m0.034s 00:07:15.628 ************************************ 00:07:15.628 END TEST dd_no_output 00:07:15.628 ************************************ 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.628 09:13:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:15.887 ************************************ 00:07:15.887 START TEST dd_wrong_blocksize 00:07:15.887 ************************************ 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:15.887 [2024-10-08 09:13:07.415303] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.887 ************************************ 00:07:15.887 END TEST dd_wrong_blocksize 00:07:15.887 ************************************ 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.887 00:07:15.887 real 0m0.078s 00:07:15.887 user 0m0.047s 00:07:15.887 sys 0m0.031s 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:15.887 ************************************ 00:07:15.887 START TEST dd_smaller_blocksize 00:07:15.887 ************************************ 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.887 
09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.887 09:13:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:15.887 [2024-10-08 09:13:07.551292] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:07:15.887 [2024-10-08 09:13:07.551548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62108 ] 00:07:16.147 [2024-10-08 09:13:07.689235] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.147 [2024-10-08 09:13:07.792475] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.406 [2024-10-08 09:13:07.852465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.666 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:16.939 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:16.939 [2024-10-08 09:13:08.425176] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:16.939 [2024-10-08 09:13:08.425386] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.939 [2024-10-08 09:13:08.539617] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.210 00:07:17.210 real 0m1.142s 00:07:17.210 user 0m0.450s 00:07:17.210 sys 0m0.584s 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.210 ************************************ 00:07:17.210 END TEST dd_smaller_blocksize 00:07:17.210 ************************************ 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.210 ************************************ 00:07:17.210 START TEST dd_invalid_count 00:07:17.210 ************************************ 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:17.210 [2024-10-08 09:13:08.738217] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.210 00:07:17.210 real 0m0.076s 00:07:17.210 user 0m0.048s 00:07:17.210 sys 0m0.027s 00:07:17.210 ************************************ 00:07:17.210 END TEST dd_invalid_count 00:07:17.210 ************************************ 00:07:17.210 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.211 ************************************ 
00:07:17.211 START TEST dd_invalid_oflag 00:07:17.211 ************************************ 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:17.211 [2024-10-08 09:13:08.863928] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.211 00:07:17.211 real 0m0.073s 00:07:17.211 user 0m0.040s 00:07:17.211 sys 0m0.031s 00:07:17.211 ************************************ 00:07:17.211 END TEST dd_invalid_oflag 00:07:17.211 ************************************ 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.211 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.470 ************************************ 00:07:17.470 START TEST dd_invalid_iflag 00:07:17.470 
************************************ 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.470 09:13:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:17.470 [2024-10-08 09:13:08.992436] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.470 ************************************ 00:07:17.470 END TEST dd_invalid_iflag 00:07:17.470 ************************************ 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.470 00:07:17.470 real 0m0.076s 00:07:17.470 user 0m0.042s 00:07:17.470 sys 0m0.032s 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.470 ************************************ 00:07:17.470 START TEST dd_unknown_flag 00:07:17.470 ************************************ 00:07:17.470 
09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.470 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:17.470 [2024-10-08 09:13:09.119412] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:17.470 [2024-10-08 09:13:09.119497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62211 ] 00:07:17.729 [2024-10-08 09:13:09.258156] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.729 [2024-10-08 09:13:09.355140] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.988 [2024-10-08 09:13:09.412437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.988 [2024-10-08 09:13:09.446826] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:17.988 [2024-10-08 09:13:09.446894] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.988 [2024-10-08 09:13:09.446964] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:17.988 [2024-10-08 09:13:09.446977] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.988 [2024-10-08 09:13:09.447216] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:17.988 [2024-10-08 09:13:09.447232] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.988 [2024-10-08 09:13:09.447276] app.c:1047:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:17.988 [2024-10-08 09:13:09.447285] app.c:1047:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:17.988 [2024-10-08 09:13:09.557088] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:17.988 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:17.988 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.988 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:17.988 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:17.988 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:17.988 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.988 00:07:17.988 real 0m0.592s 00:07:17.988 user 0m0.329s 00:07:17.988 sys 0m0.169s 00:07:17.988 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.988 ************************************ 00:07:17.988 END TEST dd_unknown_flag 00:07:17.988 ************************************ 00:07:17.988 09:13:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.247 ************************************ 00:07:18.247 START TEST dd_invalid_json 00:07:18.247 ************************************ 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.247 09:13:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:18.247 [2024-10-08 09:13:09.765006] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:18.247 [2024-10-08 09:13:09.765254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62244 ] 00:07:18.247 [2024-10-08 09:13:09.902561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.507 [2024-10-08 09:13:09.984570] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.507 [2024-10-08 09:13:09.984656] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:18.507 [2024-10-08 09:13:09.984670] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:18.507 [2024-10-08 09:13:09.984679] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.507 [2024-10-08 09:13:09.984723] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.507 00:07:18.507 real 0m0.355s 00:07:18.507 user 0m0.187s 00:07:18.507 sys 0m0.066s 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.507 ************************************ 00:07:18.507 END TEST dd_invalid_json 00:07:18.507 ************************************ 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.507 ************************************ 00:07:18.507 START TEST dd_invalid_seek 00:07:18.507 ************************************ 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:18.507 
09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.507 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:18.507 [2024-10-08 09:13:10.177352] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:18.507 [2024-10-08 09:13:10.177446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62269 ] 00:07:18.507 { 00:07:18.507 "subsystems": [ 00:07:18.507 { 00:07:18.507 "subsystem": "bdev", 00:07:18.507 "config": [ 00:07:18.507 { 00:07:18.507 "params": { 00:07:18.507 "block_size": 512, 00:07:18.507 "num_blocks": 512, 00:07:18.507 "name": "malloc0" 00:07:18.507 }, 00:07:18.507 "method": "bdev_malloc_create" 00:07:18.507 }, 00:07:18.507 { 00:07:18.507 "params": { 00:07:18.507 "block_size": 512, 00:07:18.507 "num_blocks": 512, 00:07:18.507 "name": "malloc1" 00:07:18.507 }, 00:07:18.507 "method": "bdev_malloc_create" 00:07:18.507 }, 00:07:18.507 { 00:07:18.507 "method": "bdev_wait_for_examine" 00:07:18.507 } 00:07:18.507 ] 00:07:18.507 } 00:07:18.507 ] 00:07:18.507 } 00:07:18.766 [2024-10-08 09:13:10.315316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.766 [2024-10-08 09:13:10.391469] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.766 [2024-10-08 09:13:10.445821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.025 [2024-10-08 09:13:10.507389] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:19.025 [2024-10-08 09:13:10.507465] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.025 [2024-10-08 09:13:10.620573] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.025 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:07:19.025 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.025 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:07:19.025 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:07:19.025 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:07:19.025 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.025 00:07:19.025 real 0m0.590s 00:07:19.025 user 0m0.380s 00:07:19.025 sys 0m0.169s 00:07:19.025 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.025 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:19.025 ************************************ 00:07:19.025 END TEST dd_invalid_seek 00:07:19.025 ************************************ 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.285 ************************************ 00:07:19.285 START TEST dd_invalid_skip 00:07:19.285 ************************************ 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.285 09:13:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:19.285 { 00:07:19.285 "subsystems": [ 00:07:19.285 { 00:07:19.285 "subsystem": "bdev", 00:07:19.285 "config": [ 00:07:19.285 { 00:07:19.285 "params": { 00:07:19.285 "block_size": 512, 00:07:19.285 "num_blocks": 512, 00:07:19.285 "name": "malloc0" 00:07:19.285 }, 00:07:19.285 "method": "bdev_malloc_create" 00:07:19.285 }, 00:07:19.285 { 00:07:19.285 "params": { 00:07:19.285 "block_size": 512, 00:07:19.285 "num_blocks": 512, 00:07:19.285 "name": "malloc1" 
00:07:19.285 }, 00:07:19.285 "method": "bdev_malloc_create" 00:07:19.285 }, 00:07:19.285 { 00:07:19.285 "method": "bdev_wait_for_examine" 00:07:19.285 } 00:07:19.285 ] 00:07:19.285 } 00:07:19.285 ] 00:07:19.285 } 00:07:19.285 [2024-10-08 09:13:10.823316] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:07:19.285 [2024-10-08 09:13:10.823410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62307 ] 00:07:19.285 [2024-10-08 09:13:10.959457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.544 [2024-10-08 09:13:11.041432] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.544 [2024-10-08 09:13:11.094707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.544 [2024-10-08 09:13:11.153061] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:19.544 [2024-10-08 09:13:11.153131] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.803 [2024-10-08 09:13:11.266141] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.803 ************************************ 00:07:19.803 END TEST dd_invalid_skip 00:07:19.803 ************************************ 00:07:19.803 00:07:19.803 real 0m0.593s 00:07:19.803 user 0m0.381s 00:07:19.803 sys 0m0.168s 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.803 ************************************ 00:07:19.803 START TEST dd_invalid_input_count 00:07:19.803 ************************************ 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:19.803 09:13:11 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:19.803 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.804 09:13:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:19.804 [2024-10-08 09:13:11.467964] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:19.804 [2024-10-08 09:13:11.468068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62336 ] 00:07:19.804 { 00:07:19.804 "subsystems": [ 00:07:19.804 { 00:07:19.804 "subsystem": "bdev", 00:07:19.804 "config": [ 00:07:19.804 { 00:07:19.804 "params": { 00:07:19.804 "block_size": 512, 00:07:19.804 "num_blocks": 512, 00:07:19.804 "name": "malloc0" 00:07:19.804 }, 00:07:19.804 "method": "bdev_malloc_create" 00:07:19.804 }, 00:07:19.804 { 00:07:19.804 "params": { 00:07:19.804 "block_size": 512, 00:07:19.804 "num_blocks": 512, 00:07:19.804 "name": "malloc1" 00:07:19.804 }, 00:07:19.804 "method": "bdev_malloc_create" 00:07:19.804 }, 00:07:19.804 { 00:07:19.804 "method": "bdev_wait_for_examine" 00:07:19.804 } 00:07:19.804 ] 00:07:19.804 } 00:07:19.804 ] 00:07:19.804 } 00:07:20.063 [2024-10-08 09:13:11.606407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.063 [2024-10-08 09:13:11.692278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.063 [2024-10-08 09:13:11.744669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.322 [2024-10-08 09:13:11.803531] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:20.322 [2024-10-08 09:13:11.803857] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.322 [2024-10-08 09:13:11.927665] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.581 00:07:20.581 real 0m0.615s 00:07:20.581 user 0m0.396s 00:07:20.581 sys 0m0.178s 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.581 ************************************ 00:07:20.581 END TEST dd_invalid_input_count 00:07:20.581 ************************************ 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.581 ************************************ 00:07:20.581 START TEST dd_invalid_output_count 00:07:20.581 ************************************ 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.581 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:20.581 { 00:07:20.581 "subsystems": [ 00:07:20.581 { 00:07:20.581 "subsystem": "bdev", 00:07:20.581 "config": [ 00:07:20.581 { 00:07:20.581 "params": { 00:07:20.581 "block_size": 512, 00:07:20.581 "num_blocks": 512, 00:07:20.581 "name": "malloc0" 00:07:20.581 }, 00:07:20.581 "method": "bdev_malloc_create" 00:07:20.581 }, 00:07:20.581 { 00:07:20.581 "method": "bdev_wait_for_examine" 00:07:20.581 } 00:07:20.581 ] 00:07:20.581 } 00:07:20.581 ] 00:07:20.581 } 00:07:20.581 [2024-10-08 09:13:12.134659] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 
initialization... 00:07:20.581 [2024-10-08 09:13:12.134775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62375 ] 00:07:20.841 [2024-10-08 09:13:12.270966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.841 [2024-10-08 09:13:12.349779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.841 [2024-10-08 09:13:12.401667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.841 [2024-10-08 09:13:12.452581] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:20.841 [2024-10-08 09:13:12.452654] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.100 [2024-10-08 09:13:12.566174] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.100 ************************************ 00:07:21.100 00:07:21.100 real 0m0.583s 00:07:21.100 user 0m0.365s 00:07:21.100 sys 0m0.166s 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:21.100 END TEST dd_invalid_output_count 00:07:21.100 ************************************ 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.100 ************************************ 00:07:21.100 START TEST dd_bs_not_multiple 00:07:21.100 ************************************ 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:21.100 09:13:12 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.100 09:13:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:21.100 { 00:07:21.100 "subsystems": [ 00:07:21.100 { 00:07:21.100 "subsystem": "bdev", 00:07:21.100 "config": [ 00:07:21.100 { 00:07:21.100 "params": { 00:07:21.100 "block_size": 512, 00:07:21.100 "num_blocks": 512, 00:07:21.100 "name": "malloc0" 00:07:21.100 }, 00:07:21.100 "method": "bdev_malloc_create" 00:07:21.100 }, 00:07:21.100 { 00:07:21.100 "params": { 00:07:21.100 "block_size": 512, 00:07:21.100 "num_blocks": 512, 00:07:21.100 "name": "malloc1" 00:07:21.100 }, 00:07:21.100 "method": "bdev_malloc_create" 00:07:21.100 }, 00:07:21.100 { 00:07:21.100 "method": "bdev_wait_for_examine" 00:07:21.100 } 00:07:21.100 ] 00:07:21.100 } 00:07:21.100 ] 00:07:21.100 } 00:07:21.100 [2024-10-08 09:13:12.770202] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:21.100 [2024-10-08 09:13:12.770321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62407 ] 00:07:21.359 [2024-10-08 09:13:12.905111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.359 [2024-10-08 09:13:12.999537] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.618 [2024-10-08 09:13:13.055346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.618 [2024-10-08 09:13:13.113513] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:21.618 [2024-10-08 09:13:13.113571] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.618 [2024-10-08 09:13:13.223533] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.877 09:13:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:07:21.877 09:13:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.877 09:13:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:07:21.877 09:13:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:07:21.877 09:13:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:07:21.877 09:13:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.877 00:07:21.877 real 0m0.604s 00:07:21.877 user 0m0.387s 00:07:21.877 sys 0m0.175s 00:07:21.877 09:13:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.877 09:13:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:21.877 ************************************ 00:07:21.877 END TEST dd_bs_not_multiple 00:07:21.877 ************************************ 00:07:21.877 ************************************ 00:07:21.877 END TEST spdk_dd_negative 00:07:21.877 ************************************ 00:07:21.877 00:07:21.877 real 0m6.882s 00:07:21.877 user 0m3.688s 00:07:21.877 sys 0m2.585s 00:07:21.878 09:13:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.878 09:13:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.878 ************************************ 00:07:21.878 END TEST spdk_dd 00:07:21.878 ************************************ 00:07:21.878 00:07:21.878 real 1m25.840s 00:07:21.878 user 0m55.798s 00:07:21.878 sys 0m36.626s 00:07:21.878 09:13:13 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.878 09:13:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:21.878 09:13:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:21.878 09:13:13 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:21.878 09:13:13 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:21.878 09:13:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:21.878 09:13:13 -- common/autotest_common.sh@10 -- # set +x 00:07:21.878 09:13:13 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:21.878 09:13:13 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:21.878 09:13:13 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:21.878 09:13:13 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:07:21.878 09:13:13 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:21.878 09:13:13 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:21.878 09:13:13 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.878 09:13:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.878 09:13:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.878 09:13:13 -- common/autotest_common.sh@10 -- # set +x 00:07:21.878 ************************************ 00:07:21.878 START TEST nvmf_tcp 00:07:21.878 ************************************ 00:07:21.878 09:13:13 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:22.137 * Looking for test storage... 00:07:22.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.137 09:13:13 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:22.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.137 --rc genhtml_branch_coverage=1 00:07:22.137 --rc genhtml_function_coverage=1 00:07:22.137 --rc genhtml_legend=1 00:07:22.137 --rc geninfo_all_blocks=1 00:07:22.137 --rc geninfo_unexecuted_blocks=1 00:07:22.137 00:07:22.137 ' 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:22.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.137 --rc genhtml_branch_coverage=1 00:07:22.137 --rc genhtml_function_coverage=1 00:07:22.137 --rc genhtml_legend=1 00:07:22.137 --rc geninfo_all_blocks=1 00:07:22.137 --rc geninfo_unexecuted_blocks=1 00:07:22.137 00:07:22.137 ' 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:22.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.137 --rc genhtml_branch_coverage=1 00:07:22.137 --rc genhtml_function_coverage=1 00:07:22.137 --rc genhtml_legend=1 00:07:22.137 --rc geninfo_all_blocks=1 00:07:22.137 --rc geninfo_unexecuted_blocks=1 00:07:22.137 00:07:22.137 ' 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:22.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.137 --rc genhtml_branch_coverage=1 00:07:22.137 --rc genhtml_function_coverage=1 00:07:22.137 --rc genhtml_legend=1 00:07:22.137 --rc geninfo_all_blocks=1 00:07:22.137 --rc geninfo_unexecuted_blocks=1 00:07:22.137 00:07:22.137 ' 00:07:22.137 09:13:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:22.137 09:13:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:22.137 09:13:13 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.137 09:13:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.137 ************************************ 00:07:22.137 START TEST nvmf_target_core 00:07:22.137 ************************************ 00:07:22.137 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:22.137 * Looking for test storage... 00:07:22.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:22.137 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:22.137 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:22.137 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:22.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.397 --rc genhtml_branch_coverage=1 00:07:22.397 --rc genhtml_function_coverage=1 00:07:22.397 --rc genhtml_legend=1 00:07:22.397 --rc geninfo_all_blocks=1 00:07:22.397 --rc geninfo_unexecuted_blocks=1 00:07:22.397 00:07:22.397 ' 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:22.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.397 --rc genhtml_branch_coverage=1 00:07:22.397 --rc genhtml_function_coverage=1 00:07:22.397 --rc genhtml_legend=1 00:07:22.397 --rc geninfo_all_blocks=1 00:07:22.397 --rc geninfo_unexecuted_blocks=1 00:07:22.397 00:07:22.397 ' 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:22.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.397 --rc genhtml_branch_coverage=1 00:07:22.397 --rc genhtml_function_coverage=1 00:07:22.397 --rc genhtml_legend=1 00:07:22.397 --rc geninfo_all_blocks=1 00:07:22.397 --rc geninfo_unexecuted_blocks=1 00:07:22.397 00:07:22.397 ' 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:22.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.397 --rc genhtml_branch_coverage=1 00:07:22.397 --rc genhtml_function_coverage=1 00:07:22.397 --rc genhtml_legend=1 00:07:22.397 --rc geninfo_all_blocks=1 00:07:22.397 --rc geninfo_unexecuted_blocks=1 00:07:22.397 00:07:22.397 ' 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.397 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.398 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.398 ************************************ 00:07:22.398 START TEST nvmf_host_management 00:07:22.398 ************************************ 00:07:22.398 09:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.398 * Looking for test storage... 
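The "[: : integer expression expected" complaint from common.sh line 33 a few entries up (and again when common.sh is re-sourced further down) is the traced test '[' '' -eq 1 ']': an unset variable expands to an empty string, test(1) refuses to compare it numerically, and the script simply carries on. A minimal reproduction with a defensive variant, using a placeholder variable name since the trace does not show which variable is empty:

unset SPDK_SOME_FLAG                               # placeholder name; the real variable is not visible in the trace
[ "$SPDK_SOME_FLAG" -eq 1 ] && echo enabled        # -> [: : integer expression expected (status 2, branch skipped)
[ "${SPDK_SOME_FLAG:-0}" -eq 1 ] && echo enabled   # defaulted expansion stays numeric, no warning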
00:07:22.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:22.398 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:22.398 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:22.398 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:22.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.658 --rc genhtml_branch_coverage=1 00:07:22.658 --rc genhtml_function_coverage=1 00:07:22.658 --rc genhtml_legend=1 00:07:22.658 --rc geninfo_all_blocks=1 00:07:22.658 --rc geninfo_unexecuted_blocks=1 00:07:22.658 00:07:22.658 ' 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:22.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.658 --rc genhtml_branch_coverage=1 00:07:22.658 --rc genhtml_function_coverage=1 00:07:22.658 --rc genhtml_legend=1 00:07:22.658 --rc geninfo_all_blocks=1 00:07:22.658 --rc geninfo_unexecuted_blocks=1 00:07:22.658 00:07:22.658 ' 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:22.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.658 --rc genhtml_branch_coverage=1 00:07:22.658 --rc genhtml_function_coverage=1 00:07:22.658 --rc genhtml_legend=1 00:07:22.658 --rc geninfo_all_blocks=1 00:07:22.658 --rc geninfo_unexecuted_blocks=1 00:07:22.658 00:07:22.658 ' 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:22.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.658 --rc genhtml_branch_coverage=1 00:07:22.658 --rc genhtml_function_coverage=1 00:07:22.658 --rc genhtml_legend=1 00:07:22.658 --rc geninfo_all_blocks=1 00:07:22.658 --rc geninfo_unexecuted_blocks=1 00:07:22.658 00:07:22.658 ' 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
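The same lcov probe runs at the top of nvmf.sh, nvmf_target_core.sh and host_management.sh: lt 1.15 2 splits both version strings on "."/"-"/":" and compares them field by field, and because 1.15 sorts before 2 the old-lcov --rc options get exported each time. A condensed stand-alone rendering of that comparison (simplified from the cmp_versions trace above; assumes purely numeric fields):

version_lt() {                                   # true (0) when $1 sorts before $2
    local IFS=.-:
    local -a a=($1) b=($2)
    local v x y
    for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
        x=${a[v]:-0} y=${b[v]:-0}
        ((x > y)) && return 1
        ((x < y)) && return 0
    done
    return 1                                     # equal versions are not "less than"
}
version_lt 1.15 2 && echo "old lcov: export --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"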
00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.658 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.659 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:22.659 09:13:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:22.659 Cannot find device "nvmf_init_br" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:22.659 Cannot find device "nvmf_init_br2" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:22.659 Cannot find device "nvmf_tgt_br" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:22.659 Cannot find device "nvmf_tgt_br2" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:22.659 Cannot find device "nvmf_init_br" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:22.659 Cannot find device "nvmf_init_br2" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:22.659 Cannot find device "nvmf_tgt_br" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:22.659 Cannot find device "nvmf_tgt_br2" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:22.659 Cannot find device "nvmf_br" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:22.659 Cannot find device "nvmf_init_if" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:22.659 Cannot find device "nvmf_init_if2" 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:22.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:22.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:22.659 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:22.918 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:22.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:22.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:07:22.919 00:07:22.919 --- 10.0.0.3 ping statistics --- 00:07:22.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.919 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:22.919 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:22.919 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:07:22.919 00:07:22.919 --- 10.0.0.4 ping statistics --- 00:07:22.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.919 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:22.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:22.919 00:07:22.919 --- 10.0.0.1 ping statistics --- 00:07:22.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.919 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:22.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:22.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:07:22.919 00:07:22.919 --- 10.0.0.2 ping statistics --- 00:07:22.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.919 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # return 0 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:22.919 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=62748 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 62748 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62748 ']' 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.178 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.178 [2024-10-08 09:13:14.688468] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
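Everything from nvmftestinit down to these four pings is nvmf_veth_init rebuilding the test fabric: the two initiator interfaces (10.0.0.1/.2) stay in the root namespace, the two target interfaces (10.0.0.3/.4) move into nvmf_tgt_ns_spdk, all four bridge-side peers are enslaved to nvmf_br, and iptables ACCEPT rules open TCP/4420 before connectivity is verified in both directions. A condensed recreation of that topology, folded down from the commands traced above (cleanup and the "Cannot find device" pre-checks omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator 1, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2      # initiator 2, 10.0.0.2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target 1, 10.0.0.3
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # target 2, 10.0.0.4
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                            # only the bridge-side peers join nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # spot-check both directions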
00:07:23.178 [2024-10-08 09:13:14.688592] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.178 [2024-10-08 09:13:14.831357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.436 [2024-10-08 09:13:14.936787] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.436 [2024-10-08 09:13:14.936841] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.436 [2024-10-08 09:13:14.936856] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.436 [2024-10-08 09:13:14.936866] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.436 [2024-10-08 09:13:14.936876] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.436 [2024-10-08 09:13:14.938197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.436 [2024-10-08 09:13:14.938362] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.436 [2024-10-08 09:13:14.938995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.436 [2024-10-08 09:13:14.939002] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.436 [2024-10-08 09:13:15.000482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.390 [2024-10-08 09:13:15.769443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
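The target comes up inside the namespace with -m 0x1E, which is why DPDK reports four cores and the reactors land on cores 1-4; the bdevperf client started further down uses -c 0x1 and stays on core 0, so target and initiator never contend for a CPU. A small helper to decode such masks (illustrative only, not part of the test scripts):

mask_to_cores() {                      # e.g. mask_to_cores 0x1E  ->  1 2 3 4
    local mask=$(( $1 )) core
    local -a cores=()
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && cores+=("$core")
    done
    echo "${cores[@]}"
}
mask_to_cores 0x1E    # 1 2 3 4 -> matches the four "Reactor started on core N" lines above
mask_to_cores 0x1     # 0       -> the core the bdevperf process is pinned to later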
00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.390 Malloc0 00:07:24.390 [2024-10-08 09:13:15.838990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62808 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62808 /var/tmp/bdevperf.sock 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62808 ']' 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
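At this point the Malloc0 bdev exists and the target is listening on 10.0.0.3:4420, so the RPC batch assembled into rpcs.txt has already been applied (the bare rpc_cmd above reads it as a batch). The file's exact contents are not echoed in this log, so the following is only a representative sequence that would produce the same state, built from values that do appear above (64 MB / 512 B malloc bdev, serial SPDKISFASTANDAWESOME, cnode0/host0 NQNs):

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0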
00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:24.390 { 00:07:24.390 "params": { 00:07:24.390 "name": "Nvme$subsystem", 00:07:24.390 "trtype": "$TEST_TRANSPORT", 00:07:24.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.390 "adrfam": "ipv4", 00:07:24.390 "trsvcid": "$NVMF_PORT", 00:07:24.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.390 "hdgst": ${hdgst:-false}, 00:07:24.390 "ddgst": ${ddgst:-false} 00:07:24.390 }, 00:07:24.390 "method": "bdev_nvme_attach_controller" 00:07:24.390 } 00:07:24.390 EOF 00:07:24.390 )") 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:24.390 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:24.390 "params": { 00:07:24.390 "name": "Nvme0", 00:07:24.390 "trtype": "tcp", 00:07:24.390 "traddr": "10.0.0.3", 00:07:24.390 "adrfam": "ipv4", 00:07:24.390 "trsvcid": "4420", 00:07:24.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.390 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:24.390 "hdgst": false, 00:07:24.390 "ddgst": false 00:07:24.390 }, 00:07:24.390 "method": "bdev_nvme_attach_controller" 00:07:24.390 }' 00:07:24.391 [2024-10-08 09:13:15.937772] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:07:24.391 [2024-10-08 09:13:15.937850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62808 ] 00:07:24.649 [2024-10-08 09:13:16.073040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.649 [2024-10-08 09:13:16.174442] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.649 [2024-10-08 09:13:16.242189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.907 Running I/O for 10 seconds... 
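gen_nvmf_target_json emits the attach-controller stanza printed above and bdevperf consumes it through process substitution (--json /dev/fd/63), so the config never touches disk; -q 64 is the queue depth, -o 65536 the I/O size in bytes, -w verify the workload, and -t 10 the run time in seconds. A stand-alone approximation using a temp file instead of /dev/fd; the outer "subsystems" wrapper is the standard SPDK JSON-config shape and is assumed here, since only the inner fragment is visible in the log:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10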
00:07:25.475 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.475 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:25.475 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:25.475 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.475 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.475 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.475 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.475 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.476 09:13:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.476 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:25.476 [2024-10-08 09:13:17.085283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:25.476 [2024-10-08 09:13:17.085771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.085982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.085992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.086003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.086012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.086023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.086038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 
[2024-10-08 09:13:17.086050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.086059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.086071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.086080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.086092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.086101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.476 [2024-10-08 09:13:17.086112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.476 [2024-10-08 09:13:17.086122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 
09:13:17.086267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 
09:13:17.086512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 
09:13:17.086718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.477 [2024-10-08 09:13:17.086856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.086867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135c6b0 is same with the state(6) to be set 00:07:25.477 [2024-10-08 09:13:17.086943] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x135c6b0 was disconnected and freed. reset controller. 
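The long run of paired notices above is the expected signature of an I/O queue pair being torn down: every outstanding WRITE (cid 0 through 63, LBAs stepping by 128 blocks from 8192 up to 16256) is completed with ABORTED - SQ DELETION before qpair 0x135c6b0 is disconnected, freed, and the controller reset. When triaging a log like this it can help to confirm the aborts are confined to one submission queue; the snippet below is a minimal sketch that relies only on the record format visible above and assumes the console output has been saved to build.log (a placeholder file name).

# "build.log" is a placeholder for a saved copy of this console output.
# Total abort completions across all queues:
grep -c 'ABORTED - SQ DELETION' build.log
# Aborted WRITE commands grouped by submission queue id (sqid):
grep -o 'WRITE sqid:[0-9]* cid:[0-9]*' build.log |
  awk -F'[: ]' '{n[$3]++} END {for (q in n) printf "sqid %s: %d aborted WRITEs\n", q, n[q]}'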
00:07:25.477 [2024-10-08 09:13:17.087057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.477 [2024-10-08 09:13:17.087074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.087086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.477 [2024-10-08 09:13:17.087095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.087106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.477 [2024-10-08 09:13:17.087116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.087126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.477 [2024-10-08 09:13:17.087135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.477 [2024-10-08 09:13:17.087144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135cb20 is same with the state(6) to be set 00:07:25.477 [2024-10-08 09:13:17.088219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:25.477 task offset: 8192 on job bdev=Nvme0n1 fails 00:07:25.477 00:07:25.477 Latency(us) 00:07:25.477 [2024-10-08T09:13:17.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.477 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:25.477 Job: Nvme0n1 ended in about 0.72 seconds with error 00:07:25.477 Verification LBA range: start 0x0 length 0x400 00:07:25.477 Nvme0n1 : 0.72 1515.66 94.73 89.16 0.00 38928.96 2308.65 37891.72 00:07:25.477 [2024-10-08T09:13:17.160Z] =================================================================================================================== 00:07:25.477 [2024-10-08T09:13:17.160Z] Total : 1515.66 94.73 89.16 0.00 38928.96 2308.65 37891.72 00:07:25.477 [2024-10-08 09:13:17.090231] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.478 [2024-10-08 09:13:17.090260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135cb20 (9): Bad file descriptor 00:07:25.478 [2024-10-08 09:13:17.097476] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
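The host_management.sh steps traced above show what provoked those aborts: while bdevperf was still driving verify I/O, the test removed the initiator's host NQN from the subsystem, immediately re-added it, and slept for a second while the target-side disconnect rippled through. The in-flight writes came back ABORTED - SQ DELETION, bdev_nvme reset the controller ("Resetting controller successful"), and the first bdevperf instance exited with an error, so the test launches a fresh one for the verification pass below. As a rough standalone sketch of that fault-injection toggle, assuming a running SPDK target and the subsystem/host NQNs used in this run:

# Paths and NQNs taken from the trace above; adjust for your setup.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBSYS=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host0

# Revoke the host's access: its qpairs are deleted and outstanding I/O is aborted.
"$RPC" nvmf_subsystem_remove_host "$SUBSYS" "$HOST"
# Restore access so bdev_nvme can reset and reconnect the Nvme0n1 controller.
"$RPC" nvmf_subsystem_add_host "$SUBSYS" "$HOST"
# Give the disconnect/reset a moment to play out, as the test does.
sleep 1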
00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62808 00:07:26.412 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62808) - No such process 00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:26.412 { 00:07:26.412 "params": { 00:07:26.412 "name": "Nvme$subsystem", 00:07:26.412 "trtype": "$TEST_TRANSPORT", 00:07:26.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.412 "adrfam": "ipv4", 00:07:26.412 "trsvcid": "$NVMF_PORT", 00:07:26.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.412 "hdgst": ${hdgst:-false}, 00:07:26.412 "ddgst": ${ddgst:-false} 00:07:26.412 }, 00:07:26.412 "method": "bdev_nvme_attach_controller" 00:07:26.412 } 00:07:26.412 EOF 00:07:26.412 )") 00:07:26.412 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:26.671 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:26.671 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:26.671 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:26.671 "params": { 00:07:26.671 "name": "Nvme0", 00:07:26.671 "trtype": "tcp", 00:07:26.671 "traddr": "10.0.0.3", 00:07:26.671 "adrfam": "ipv4", 00:07:26.671 "trsvcid": "4420", 00:07:26.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.671 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:26.671 "hdgst": false, 00:07:26.671 "ddgst": false 00:07:26.671 }, 00:07:26.671 "method": "bdev_nvme_attach_controller" 00:07:26.671 }' 00:07:26.671 [2024-10-08 09:13:18.142668] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
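The gen_nvmf_target_json helper traced above expands the nvmf/common.sh heredoc into the bdev_nvme_attach_controller entry that bdevperf reads over /dev/fd/62, and the resolved parameters are printed at the end of the trace. As a hedged sketch of driving the same attach by hand -- assuming the standard SPDK --json envelope of "subsystems" -> "bdev" -> "config", which is not shown verbatim in this excerpt -- the equivalent would look roughly like:

# Write a self-contained config using the values printed above, then point bdevperf at it.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1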
00:07:26.671 [2024-10-08 09:13:18.142791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62846 ] 00:07:26.671 [2024-10-08 09:13:18.284216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.929 [2024-10-08 09:13:18.394965] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.929 [2024-10-08 09:13:18.456993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.929 Running I/O for 1 seconds... 00:07:28.305 1536.00 IOPS, 96.00 MiB/s 00:07:28.305 Latency(us) 00:07:28.305 [2024-10-08T09:13:19.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.305 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:28.305 Verification LBA range: start 0x0 length 0x400 00:07:28.305 Nvme0n1 : 1.01 1590.66 99.42 0.00 0.00 39460.93 4110.89 36223.53 00:07:28.305 [2024-10-08T09:13:19.988Z] =================================================================================================================== 00:07:28.305 [2024-10-08T09:13:19.988Z] Total : 1590.66 99.42 0.00 0.00 39460.93 4110.89 36223.53 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.305 rmmod nvme_tcp 00:07:28.305 rmmod nvme_fabrics 00:07:28.305 rmmod nvme_keyring 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 62748 ']' 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 62748 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 62748 ']' 00:07:28.305 09:13:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 62748 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62748 00:07:28.305 killing process with pid 62748 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62748' 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 62748 00:07:28.305 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 62748 00:07:28.564 [2024-10-08 09:13:20.164691] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:28.564 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:28.823 09:13:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:28.823 00:07:28.823 real 0m6.497s 00:07:28.823 user 0m23.909s 00:07:28.823 sys 0m1.675s 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.823 ************************************ 00:07:28.823 END TEST nvmf_host_management 00:07:28.823 ************************************ 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.823 ************************************ 00:07:28.823 START TEST nvmf_lvol 00:07:28.823 ************************************ 00:07:28.823 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:29.084 * Looking for test storage... 
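Before nvmf_lvol begins its own setup, note how the nvmftestfini sequence traced just above (ahead of the END TEST banner) unwinds the fixture in three moves: unload the host-side NVMe/TCP modules, strip only the iptables rules the suite tagged with an SPDK_NVMF comment, then delete the veth/bridge/namespace plumbing. Condensed into a standalone sketch using the interface and namespace names from this run (dependent modules such as nvme_keyring fall out with nvme-tcp):

# Unload host-side NVMe over fabrics modules; ignore errors if they are already gone.
modprobe -r nvme-tcp nvme-fabrics 2>/dev/null || true
# Remove only the firewall rules tagged by the test suite.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Tear down the virtual topology.
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$port" nomaster 2>/dev/null
  ip link set "$port" down 2>/dev/null
done
ip link delete nvmf_br type bridge 2>/dev/null
ip link delete nvmf_init_if 2>/dev/null
ip link delete nvmf_init_if2 2>/dev/null
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null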
00:07:29.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:29.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.084 --rc genhtml_branch_coverage=1 00:07:29.084 --rc genhtml_function_coverage=1 00:07:29.084 --rc genhtml_legend=1 00:07:29.084 --rc geninfo_all_blocks=1 00:07:29.084 --rc geninfo_unexecuted_blocks=1 00:07:29.084 00:07:29.084 ' 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:29.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.084 --rc genhtml_branch_coverage=1 00:07:29.084 --rc genhtml_function_coverage=1 00:07:29.084 --rc genhtml_legend=1 00:07:29.084 --rc geninfo_all_blocks=1 00:07:29.084 --rc geninfo_unexecuted_blocks=1 00:07:29.084 00:07:29.084 ' 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:29.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.084 --rc genhtml_branch_coverage=1 00:07:29.084 --rc genhtml_function_coverage=1 00:07:29.084 --rc genhtml_legend=1 00:07:29.084 --rc geninfo_all_blocks=1 00:07:29.084 --rc geninfo_unexecuted_blocks=1 00:07:29.084 00:07:29.084 ' 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:29.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.084 --rc genhtml_branch_coverage=1 00:07:29.084 --rc genhtml_function_coverage=1 00:07:29.084 --rc genhtml_legend=1 00:07:29.084 --rc geninfo_all_blocks=1 00:07:29.084 --rc geninfo_unexecuted_blocks=1 00:07:29.084 00:07:29.084 ' 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.084 09:13:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.084 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.085 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:29.085 
09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
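The "line 33: [: : integer expression expected" complaint a few records up comes from common.sh evaluating an arithmetic test ('[' '' -eq 1 ']') against a variable that is unset in this environment. It is harmless here, but the usual fix is to give such flags a numeric default (or test for emptiness first) so the trace stays clean. A minimal sketch with a stand-in variable name:

# "flag" is a placeholder; the real variable lives in build_nvmf_app_args in nvmf/common.sh.
flag=${flag:-0}                 # default an unset/empty flag to 0
if [ "$flag" -eq 1 ]; then      # now always a valid integer comparison
  echo "feature enabled"
fi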
00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:29.085 Cannot find device "nvmf_init_br" 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:29.085 Cannot find device "nvmf_init_br2" 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:29.085 Cannot find device "nvmf_tgt_br" 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.085 Cannot find device "nvmf_tgt_br2" 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:29.085 Cannot find device "nvmf_init_br" 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:29.085 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:29.344 Cannot find device "nvmf_init_br2" 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:29.344 Cannot find device "nvmf_tgt_br" 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:29.344 Cannot find device "nvmf_tgt_br2" 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:29.344 Cannot find device "nvmf_br" 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:29.344 Cannot find device "nvmf_init_if" 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:29.344 Cannot find device "nvmf_init_if2" 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:29.344 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:29.345 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:29.345 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:29.345 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.345 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:29.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:07:29.604 00:07:29.604 --- 10.0.0.3 ping statistics --- 00:07:29.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.604 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:29.604 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:29.604 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:07:29.604 00:07:29.604 --- 10.0.0.4 ping statistics --- 00:07:29.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.604 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:07:29.604 00:07:29.604 --- 10.0.0.1 ping statistics --- 00:07:29.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.604 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:29.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:29.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:07:29.604 00:07:29.604 --- 10.0.0.2 ping statistics --- 00:07:29.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.604 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # return 0 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=63116 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 63116 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 63116 ']' 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.604 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.604 [2024-10-08 09:13:21.179795] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:29.604 [2024-10-08 09:13:21.179889] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.863 [2024-10-08 09:13:21.319457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.863 [2024-10-08 09:13:21.426390] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.863 [2024-10-08 09:13:21.426457] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.863 [2024-10-08 09:13:21.426472] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.863 [2024-10-08 09:13:21.426482] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.863 [2024-10-08 09:13:21.426491] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.863 [2024-10-08 09:13:21.427178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.863 [2024-10-08 09:13:21.427271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.863 [2024-10-08 09:13:21.427279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.863 [2024-10-08 09:13:21.487324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.810 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.810 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:30.810 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:30.810 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.810 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.810 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.810 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.081 [2024-10-08 09:13:22.509914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.081 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.339 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:31.339 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.597 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:31.597 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:31.856 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:32.114 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f39acced-831f-4ef2-88ca-09df293c84e2 00:07:32.114 09:13:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f39acced-831f-4ef2-88ca-09df293c84e2 lvol 20 00:07:32.372 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=20a8470b-cacd-4c17-ae5b-7e62af1c3749 00:07:32.372 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:32.630 09:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 20a8470b-cacd-4c17-ae5b-7e62af1c3749 00:07:32.888 09:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:33.145 [2024-10-08 09:13:24.737227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:33.145 09:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:33.402 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63197 00:07:33.402 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:33.402 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:34.775 09:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 20a8470b-cacd-4c17-ae5b-7e62af1c3749 MY_SNAPSHOT 00:07:34.775 09:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a08d59aa-a462-4290-9c9b-11c4f68af59d 00:07:34.775 09:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 20a8470b-cacd-4c17-ae5b-7e62af1c3749 30 00:07:35.033 09:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone a08d59aa-a462-4290-9c9b-11c4f68af59d MY_CLONE 00:07:35.599 09:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=076e16ba-a068-409b-b2c8-84c5be37407c 00:07:35.599 09:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 076e16ba-a068-409b-b2c8-84c5be37407c 00:07:35.857 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63197 00:07:43.964 Initializing NVMe Controllers 00:07:43.964 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:43.964 Controller IO queue size 128, less than required. 00:07:43.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:43.964 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:43.964 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:43.964 Initialization complete. Launching workers. 
00:07:43.964 ======================================================== 00:07:43.964 Latency(us) 00:07:43.964 Device Information : IOPS MiB/s Average min max 00:07:43.964 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10118.80 39.53 12650.91 2003.94 56103.27 00:07:43.964 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10250.40 40.04 12489.05 3614.19 57333.65 00:07:43.964 ======================================================== 00:07:43.964 Total : 20369.19 79.57 12569.46 2003.94 57333.65 00:07:43.964 00:07:43.964 09:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.222 09:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 20a8470b-cacd-4c17-ae5b-7e62af1c3749 00:07:44.480 09:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f39acced-831f-4ef2-88ca-09df293c84e2 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.738 rmmod nvme_tcp 00:07:44.738 rmmod nvme_fabrics 00:07:44.738 rmmod nvme_keyring 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 63116 ']' 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 63116 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 63116 ']' 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 63116 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.738 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63116 00:07:44.996 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.996 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.996 killing process with pid 63116 00:07:44.996 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 63116' 00:07:44.996 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 63116 00:07:44.996 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 63116 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:45.255 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.514 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.514 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:45.514 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.514 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.514 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.514 ************************************ 00:07:45.514 END TEST nvmf_lvol 00:07:45.514 ************************************ 00:07:45.514 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:45.514 00:07:45.514 real 0m16.515s 00:07:45.514 user 
1m7.174s 00:07:45.514 sys 0m4.264s 00:07:45.514 09:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.514 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.514 09:13:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.514 09:13:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.514 09:13:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.514 09:13:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.514 ************************************ 00:07:45.514 START TEST nvmf_lvs_grow 00:07:45.514 ************************************ 00:07:45.514 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.514 * Looking for test storage... 00:07:45.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:45.514 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:45.514 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:45.514 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:45.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.774 --rc genhtml_branch_coverage=1 00:07:45.774 --rc genhtml_function_coverage=1 00:07:45.774 --rc genhtml_legend=1 00:07:45.774 --rc geninfo_all_blocks=1 00:07:45.774 --rc geninfo_unexecuted_blocks=1 00:07:45.774 00:07:45.774 ' 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:45.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.774 --rc genhtml_branch_coverage=1 00:07:45.774 --rc genhtml_function_coverage=1 00:07:45.774 --rc genhtml_legend=1 00:07:45.774 --rc geninfo_all_blocks=1 00:07:45.774 --rc geninfo_unexecuted_blocks=1 00:07:45.774 00:07:45.774 ' 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:45.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.774 --rc genhtml_branch_coverage=1 00:07:45.774 --rc genhtml_function_coverage=1 00:07:45.774 --rc genhtml_legend=1 00:07:45.774 --rc geninfo_all_blocks=1 00:07:45.774 --rc geninfo_unexecuted_blocks=1 00:07:45.774 00:07:45.774 ' 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:45.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.774 --rc genhtml_branch_coverage=1 00:07:45.774 --rc genhtml_function_coverage=1 00:07:45.774 --rc genhtml_legend=1 00:07:45.774 --rc geninfo_all_blocks=1 00:07:45.774 --rc geninfo_unexecuted_blocks=1 00:07:45.774 00:07:45.774 ' 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:45.774 09:13:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.774 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.775 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
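The nvmftestinit call below rebuilds the veth/namespace topology that the previous test tore down a few lines above. Condensed into plain iproute2 commands — interface names and addresses copied from the trace; this is a sketch, not the actual nvmf_veth_init helper from nvmf/common.sh — the setup is roughly:

    # target-side namespace; the *_if ends of the target pairs are moved into it
    ip netns add nvmf_tgt_ns_spdk
    # two initiator-side veth pairs stay in the root namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    # two target-side veth pairs; only the *_if ends go into the namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiators 10.0.0.1/.2, target 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up, then enslave the *_br ends to a single bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br

The pings at the end of the init sequence then verify reachability in both directions across nvmf_br: root namespace to 10.0.0.3/.4, and the target namespace back to 10.0.0.1/.2.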
00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:45.775 Cannot find device "nvmf_init_br" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:45.775 Cannot find device "nvmf_init_br2" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:45.775 Cannot find device "nvmf_tgt_br" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.775 Cannot find device "nvmf_tgt_br2" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:45.775 Cannot find device "nvmf_init_br" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:45.775 Cannot find device "nvmf_init_br2" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:45.775 Cannot find device "nvmf_tgt_br" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:45.775 Cannot find device "nvmf_tgt_br2" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:45.775 Cannot find device "nvmf_br" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:45.775 Cannot find device "nvmf_init_if" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:45.775 Cannot find device "nvmf_init_if2" 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:45.775 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
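With the bridge assembled, the ipts calls that follow open TCP port 4420 toward both initiator interfaces and allow forwarding across nvmf_br. The ipts/iptr helpers follow a tag-and-restore pattern: every inserted rule carries an SPDK_NVMF comment, and teardown reloads the saved ruleset with all tagged rules filtered out. Sketched here from the expanded commands visible in this trace (the real helpers live in nvmf/common.sh and may differ in detail):

    # insert a rule and tag it with its own text via an iptables comment
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # teardown (seen later as "iptr"): reload the ruleset minus every tagged rule
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    # as used in the lines below:
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

This keeps cleanup independent of how many rules a given test added: anything without the SPDK_NVMF tag survives the restore untouched.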
00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:46.035 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:46.035 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:07:46.035 00:07:46.035 --- 10.0.0.3 ping statistics --- 00:07:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.035 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:46.035 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:46.035 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:07:46.035 00:07:46.035 --- 10.0.0.4 ping statistics --- 00:07:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.035 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:46.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:46.035 00:07:46.035 --- 10.0.0.1 ping statistics --- 00:07:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.035 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:46.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:46.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:07:46.035 00:07:46.035 --- 10.0.0.2 ping statistics --- 00:07:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.035 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # return 0 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=63574 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 63574 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 63574 ']' 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.035 09:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.035 [2024-10-08 09:13:37.709550] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:46.035 [2024-10-08 09:13:37.709632] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.294 [2024-10-08 09:13:37.849222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.294 [2024-10-08 09:13:37.961261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.294 [2024-10-08 09:13:37.961325] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.295 [2024-10-08 09:13:37.961340] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.295 [2024-10-08 09:13:37.961351] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.295 [2024-10-08 09:13:37.961361] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.295 [2024-10-08 09:13:37.961833] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.553 [2024-10-08 09:13:38.019213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.127 09:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.127 09:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:47.127 09:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:47.127 09:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.127 09:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.386 09:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.386 09:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:47.644 [2024-10-08 09:13:39.108712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.644 ************************************ 00:07:47.644 START TEST lvs_grow_clean 00:07:47.644 ************************************ 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:47.644 09:13:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:47.644 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:47.645 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:47.645 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:47.645 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.903 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:47.903 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:48.162 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=97f4fb94-5a56-4924-af02-ae232617d925 00:07:48.162 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97f4fb94-5a56-4924-af02-ae232617d925 00:07:48.162 09:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:48.423 09:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:48.423 09:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:48.423 09:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 97f4fb94-5a56-4924-af02-ae232617d925 lvol 150 00:07:48.687 09:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=78e0c998-0c43-43b1-a70f-5b1dd6996a2d 00:07:48.687 09:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:48.687 09:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:48.946 [2024-10-08 09:13:40.510607] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:48.946 [2024-10-08 09:13:40.510705] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:48.946 true 00:07:48.946 09:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97f4fb94-5a56-4924-af02-ae232617d925 00:07:48.946 09:13:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:49.205 09:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:49.205 09:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.463 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 78e0c998-0c43-43b1-a70f-5b1dd6996a2d 00:07:49.722 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:49.980 [2024-10-08 09:13:41.555190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:49.980 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63662 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63662 /var/tmp/bdevperf.sock 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 63662 ']' 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.239 09:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:50.239 [2024-10-08 09:13:41.852765] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:07:50.239 [2024-10-08 09:13:41.852855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63662 ] 00:07:50.498 [2024-10-08 09:13:41.989725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.498 [2024-10-08 09:13:42.093794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.498 [2024-10-08 09:13:42.152116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.435 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.435 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:51.435 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:51.694 Nvme0n1 00:07:51.694 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:51.952 [ 00:07:51.952 { 00:07:51.952 "name": "Nvme0n1", 00:07:51.952 "aliases": [ 00:07:51.952 "78e0c998-0c43-43b1-a70f-5b1dd6996a2d" 00:07:51.952 ], 00:07:51.952 "product_name": "NVMe disk", 00:07:51.952 "block_size": 4096, 00:07:51.952 "num_blocks": 38912, 00:07:51.952 "uuid": "78e0c998-0c43-43b1-a70f-5b1dd6996a2d", 00:07:51.952 "numa_id": -1, 00:07:51.952 "assigned_rate_limits": { 00:07:51.952 "rw_ios_per_sec": 0, 00:07:51.952 "rw_mbytes_per_sec": 0, 00:07:51.952 "r_mbytes_per_sec": 0, 00:07:51.952 "w_mbytes_per_sec": 0 00:07:51.952 }, 00:07:51.952 "claimed": false, 00:07:51.952 "zoned": false, 00:07:51.952 "supported_io_types": { 00:07:51.952 "read": true, 00:07:51.952 "write": true, 00:07:51.953 "unmap": true, 00:07:51.953 "flush": true, 00:07:51.953 "reset": true, 00:07:51.953 "nvme_admin": true, 00:07:51.953 "nvme_io": true, 00:07:51.953 "nvme_io_md": false, 00:07:51.953 "write_zeroes": true, 00:07:51.953 "zcopy": false, 00:07:51.953 "get_zone_info": false, 00:07:51.953 "zone_management": false, 00:07:51.953 "zone_append": false, 00:07:51.953 "compare": true, 00:07:51.953 "compare_and_write": true, 00:07:51.953 "abort": true, 00:07:51.953 "seek_hole": false, 00:07:51.953 "seek_data": false, 00:07:51.953 "copy": true, 00:07:51.953 "nvme_iov_md": false 00:07:51.953 }, 00:07:51.953 "memory_domains": [ 00:07:51.953 { 00:07:51.953 "dma_device_id": "system", 00:07:51.953 "dma_device_type": 1 00:07:51.953 } 00:07:51.953 ], 00:07:51.953 "driver_specific": { 00:07:51.953 "nvme": [ 00:07:51.953 { 00:07:51.953 "trid": { 00:07:51.953 "trtype": "TCP", 00:07:51.953 "adrfam": "IPv4", 00:07:51.953 "traddr": "10.0.0.3", 00:07:51.953 "trsvcid": "4420", 00:07:51.953 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:51.953 }, 00:07:51.953 "ctrlr_data": { 00:07:51.953 "cntlid": 1, 00:07:51.953 "vendor_id": "0x8086", 00:07:51.953 "model_number": "SPDK bdev Controller", 00:07:51.953 "serial_number": "SPDK0", 00:07:51.953 "firmware_revision": "25.01", 00:07:51.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.953 "oacs": { 00:07:51.953 "security": 0, 00:07:51.953 "format": 0, 00:07:51.953 "firmware": 0, 
00:07:51.953 "ns_manage": 0 00:07:51.953 }, 00:07:51.953 "multi_ctrlr": true, 00:07:51.953 "ana_reporting": false 00:07:51.953 }, 00:07:51.953 "vs": { 00:07:51.953 "nvme_version": "1.3" 00:07:51.953 }, 00:07:51.953 "ns_data": { 00:07:51.953 "id": 1, 00:07:51.953 "can_share": true 00:07:51.953 } 00:07:51.953 } 00:07:51.953 ], 00:07:51.953 "mp_policy": "active_passive" 00:07:51.953 } 00:07:51.953 } 00:07:51.953 ] 00:07:51.953 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63686 00:07:51.953 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:51.953 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:51.953 Running I/O for 10 seconds... 00:07:53.366 Latency(us) 00:07:53.366 [2024-10-08T09:13:45.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.366 Nvme0n1 : 1.00 6792.00 26.53 0.00 0.00 0.00 0.00 0.00 00:07:53.366 [2024-10-08T09:13:45.049Z] =================================================================================================================== 00:07:53.366 [2024-10-08T09:13:45.049Z] Total : 6792.00 26.53 0.00 0.00 0.00 0.00 0.00 00:07:53.366 00:07:53.933 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 97f4fb94-5a56-4924-af02-ae232617d925 00:07:53.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.933 Nvme0n1 : 2.00 6825.00 26.66 0.00 0.00 0.00 0.00 0.00 00:07:53.933 [2024-10-08T09:13:45.616Z] =================================================================================================================== 00:07:53.933 [2024-10-08T09:13:45.616Z] Total : 6825.00 26.66 0.00 0.00 0.00 0.00 0.00 00:07:53.933 00:07:54.192 true 00:07:54.192 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:54.192 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97f4fb94-5a56-4924-af02-ae232617d925 00:07:54.760 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:54.760 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:54.760 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63686 00:07:55.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.018 Nvme0n1 : 3.00 6709.00 26.21 0.00 0.00 0.00 0.00 0.00 00:07:55.018 [2024-10-08T09:13:46.701Z] =================================================================================================================== 00:07:55.018 [2024-10-08T09:13:46.701Z] Total : 6709.00 26.21 0.00 0.00 0.00 0.00 0.00 00:07:55.018 00:07:55.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.957 Nvme0n1 : 4.00 6555.75 25.61 0.00 0.00 0.00 0.00 0.00 00:07:55.957 [2024-10-08T09:13:47.640Z] 
=================================================================================================================== 00:07:55.957 [2024-10-08T09:13:47.640Z] Total : 6555.75 25.61 0.00 0.00 0.00 0.00 0.00 00:07:55.957 00:07:57.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.335 Nvme0n1 : 5.00 6463.80 25.25 0.00 0.00 0.00 0.00 0.00 00:07:57.335 [2024-10-08T09:13:49.018Z] =================================================================================================================== 00:07:57.335 [2024-10-08T09:13:49.018Z] Total : 6463.80 25.25 0.00 0.00 0.00 0.00 0.00 00:07:57.335 00:07:58.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.275 Nvme0n1 : 6.00 6309.83 24.65 0.00 0.00 0.00 0.00 0.00 00:07:58.275 [2024-10-08T09:13:49.958Z] =================================================================================================================== 00:07:58.275 [2024-10-08T09:13:49.958Z] Total : 6309.83 24.65 0.00 0.00 0.00 0.00 0.00 00:07:58.275 00:07:59.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.210 Nvme0n1 : 7.00 6315.57 24.67 0.00 0.00 0.00 0.00 0.00 00:07:59.210 [2024-10-08T09:13:50.893Z] =================================================================================================================== 00:07:59.210 [2024-10-08T09:13:50.893Z] Total : 6315.57 24.67 0.00 0.00 0.00 0.00 0.00 00:07:59.210 00:08:00.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.146 Nvme0n1 : 8.00 6367.50 24.87 0.00 0.00 0.00 0.00 0.00 00:08:00.146 [2024-10-08T09:13:51.829Z] =================================================================================================================== 00:08:00.146 [2024-10-08T09:13:51.829Z] Total : 6367.50 24.87 0.00 0.00 0.00 0.00 0.00 00:08:00.146 00:08:01.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.081 Nvme0n1 : 9.00 6393.78 24.98 0.00 0.00 0.00 0.00 0.00 00:08:01.081 [2024-10-08T09:13:52.764Z] =================================================================================================================== 00:08:01.081 [2024-10-08T09:13:52.764Z] Total : 6393.78 24.98 0.00 0.00 0.00 0.00 0.00 00:08:01.081 00:08:02.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.034 Nvme0n1 : 10.00 6414.80 25.06 0.00 0.00 0.00 0.00 0.00 00:08:02.034 [2024-10-08T09:13:53.717Z] =================================================================================================================== 00:08:02.034 [2024-10-08T09:13:53.717Z] Total : 6414.80 25.06 0.00 0.00 0.00 0.00 0.00 00:08:02.034 00:08:02.034 00:08:02.034 Latency(us) 00:08:02.034 [2024-10-08T09:13:53.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.034 Nvme0n1 : 10.01 6421.24 25.08 0.00 0.00 19929.08 5659.93 111053.73 00:08:02.034 [2024-10-08T09:13:53.717Z] =================================================================================================================== 00:08:02.034 [2024-10-08T09:13:53.717Z] Total : 6421.24 25.08 0.00 0.00 19929.08 5659.93 111053.73 00:08:02.034 { 00:08:02.034 "results": [ 00:08:02.034 { 00:08:02.034 "job": "Nvme0n1", 00:08:02.034 "core_mask": "0x2", 00:08:02.034 "workload": "randwrite", 00:08:02.034 "status": "finished", 00:08:02.034 "queue_depth": 128, 00:08:02.034 "io_size": 4096, 00:08:02.034 "runtime": 
10.009902, 00:08:02.034 "iops": 6421.241686482045, 00:08:02.034 "mibps": 25.08297533782049, 00:08:02.034 "io_failed": 0, 00:08:02.034 "io_timeout": 0, 00:08:02.034 "avg_latency_us": 19929.08117561199, 00:08:02.034 "min_latency_us": 5659.927272727273, 00:08:02.034 "max_latency_us": 111053.73090909091 00:08:02.034 } 00:08:02.034 ], 00:08:02.034 "core_count": 1 00:08:02.034 } 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63662 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 63662 ']' 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 63662 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63662 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:02.034 killing process with pid 63662 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63662' 00:08:02.034 Received shutdown signal, test time was about 10.000000 seconds 00:08:02.034 00:08:02.034 Latency(us) 00:08:02.034 [2024-10-08T09:13:53.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.034 [2024-10-08T09:13:53.717Z] =================================================================================================================== 00:08:02.034 [2024-10-08T09:13:53.717Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 63662 00:08:02.034 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 63662 00:08:02.303 09:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:02.561 09:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:02.819 09:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97f4fb94-5a56-4924-af02-ae232617d925 00:08:02.819 09:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:03.386 09:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:03.386 09:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:03.386 09:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.386 [2024-10-08 09:13:55.052544] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97f4fb94-5a56-4924-af02-ae232617d925 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97f4fb94-5a56-4924-af02-ae232617d925 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:03.645 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97f4fb94-5a56-4924-af02-ae232617d925 00:08:03.904 request: 00:08:03.904 { 00:08:03.904 "uuid": "97f4fb94-5a56-4924-af02-ae232617d925", 00:08:03.904 "method": "bdev_lvol_get_lvstores", 00:08:03.904 "req_id": 1 00:08:03.904 } 00:08:03.904 Got JSON-RPC error response 00:08:03.904 response: 00:08:03.904 { 00:08:03.904 "code": -19, 00:08:03.904 "message": "No such device" 00:08:03.904 } 00:08:03.904 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:03.904 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.904 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:03.904 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.904 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.162 aio_bdev 00:08:04.162 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
78e0c998-0c43-43b1-a70f-5b1dd6996a2d 00:08:04.162 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=78e0c998-0c43-43b1-a70f-5b1dd6996a2d 00:08:04.162 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.162 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:04.162 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.162 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.162 09:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:04.421 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78e0c998-0c43-43b1-a70f-5b1dd6996a2d -t 2000 00:08:04.679 [ 00:08:04.679 { 00:08:04.679 "name": "78e0c998-0c43-43b1-a70f-5b1dd6996a2d", 00:08:04.679 "aliases": [ 00:08:04.679 "lvs/lvol" 00:08:04.679 ], 00:08:04.679 "product_name": "Logical Volume", 00:08:04.679 "block_size": 4096, 00:08:04.679 "num_blocks": 38912, 00:08:04.679 "uuid": "78e0c998-0c43-43b1-a70f-5b1dd6996a2d", 00:08:04.679 "assigned_rate_limits": { 00:08:04.679 "rw_ios_per_sec": 0, 00:08:04.679 "rw_mbytes_per_sec": 0, 00:08:04.679 "r_mbytes_per_sec": 0, 00:08:04.679 "w_mbytes_per_sec": 0 00:08:04.679 }, 00:08:04.679 "claimed": false, 00:08:04.679 "zoned": false, 00:08:04.679 "supported_io_types": { 00:08:04.679 "read": true, 00:08:04.679 "write": true, 00:08:04.679 "unmap": true, 00:08:04.679 "flush": false, 00:08:04.679 "reset": true, 00:08:04.679 "nvme_admin": false, 00:08:04.679 "nvme_io": false, 00:08:04.679 "nvme_io_md": false, 00:08:04.679 "write_zeroes": true, 00:08:04.679 "zcopy": false, 00:08:04.679 "get_zone_info": false, 00:08:04.679 "zone_management": false, 00:08:04.679 "zone_append": false, 00:08:04.679 "compare": false, 00:08:04.679 "compare_and_write": false, 00:08:04.679 "abort": false, 00:08:04.679 "seek_hole": true, 00:08:04.679 "seek_data": true, 00:08:04.679 "copy": false, 00:08:04.679 "nvme_iov_md": false 00:08:04.679 }, 00:08:04.679 "driver_specific": { 00:08:04.679 "lvol": { 00:08:04.679 "lvol_store_uuid": "97f4fb94-5a56-4924-af02-ae232617d925", 00:08:04.679 "base_bdev": "aio_bdev", 00:08:04.679 "thin_provision": false, 00:08:04.679 "num_allocated_clusters": 38, 00:08:04.679 "snapshot": false, 00:08:04.679 "clone": false, 00:08:04.679 "esnap_clone": false 00:08:04.679 } 00:08:04.679 } 00:08:04.679 } 00:08:04.679 ] 00:08:04.679 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:04.679 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97f4fb94-5a56-4924-af02-ae232617d925 00:08:04.679 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:04.938 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:04.938 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97f4fb94-5a56-4924-af02-ae232617d925 00:08:04.938 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:05.505 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:05.505 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 78e0c998-0c43-43b1-a70f-5b1dd6996a2d 00:08:05.763 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97f4fb94-5a56-4924-af02-ae232617d925 00:08:06.028 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:06.286 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:06.853 00:08:06.853 real 0m19.096s 00:08:06.853 user 0m18.151s 00:08:06.853 sys 0m2.590s 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:06.853 ************************************ 00:08:06.853 END TEST lvs_grow_clean 00:08:06.853 ************************************ 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.853 ************************************ 00:08:06.853 START TEST lvs_grow_dirty 00:08:06.853 ************************************ 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:06.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.112 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:07.112 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:07.370 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:07.370 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:07.370 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:07.629 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:07.629 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:07.629 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef952984-4d46-4760-9c3e-ef074a2d711b lvol 150 00:08:07.888 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=745a6d3a-a52e-4a49-bf29-2e5de47e5587 00:08:07.888 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.888 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.146 [2024-10-08 09:13:59.628673] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.146 [2024-10-08 09:13:59.628800] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.146 true 00:08:08.146 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:08.146 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:08.405 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:08.405 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:08.664 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 745a6d3a-a52e-4a49-bf29-2e5de47e5587 00:08:08.923 09:14:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:09.181 [2024-10-08 09:14:00.637367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:09.181 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63943 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63943 /var/tmp/bdevperf.sock 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63943 ']' 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.441 09:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.441 [2024-10-08 09:14:00.979390] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:08:09.441 [2024-10-08 09:14:00.979483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63943 ] 00:08:09.441 [2024-10-08 09:14:01.116779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.708 [2024-10-08 09:14:01.239716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.708 [2024-10-08 09:14:01.302084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.643 09:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.643 09:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:10.643 09:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.901 Nvme0n1 00:08:10.902 09:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:11.160 [ 00:08:11.160 { 00:08:11.160 "name": "Nvme0n1", 00:08:11.160 "aliases": [ 00:08:11.160 "745a6d3a-a52e-4a49-bf29-2e5de47e5587" 00:08:11.160 ], 00:08:11.160 "product_name": "NVMe disk", 00:08:11.161 "block_size": 4096, 00:08:11.161 "num_blocks": 38912, 00:08:11.161 "uuid": "745a6d3a-a52e-4a49-bf29-2e5de47e5587", 00:08:11.161 "numa_id": -1, 00:08:11.161 "assigned_rate_limits": { 00:08:11.161 "rw_ios_per_sec": 0, 00:08:11.161 "rw_mbytes_per_sec": 0, 00:08:11.161 "r_mbytes_per_sec": 0, 00:08:11.161 "w_mbytes_per_sec": 0 00:08:11.161 }, 00:08:11.161 "claimed": false, 00:08:11.161 "zoned": false, 00:08:11.161 "supported_io_types": { 00:08:11.161 "read": true, 00:08:11.161 "write": true, 00:08:11.161 "unmap": true, 00:08:11.161 "flush": true, 00:08:11.161 "reset": true, 00:08:11.161 "nvme_admin": true, 00:08:11.161 "nvme_io": true, 00:08:11.161 "nvme_io_md": false, 00:08:11.161 "write_zeroes": true, 00:08:11.161 "zcopy": false, 00:08:11.161 "get_zone_info": false, 00:08:11.161 "zone_management": false, 00:08:11.161 "zone_append": false, 00:08:11.161 "compare": true, 00:08:11.161 "compare_and_write": true, 00:08:11.161 "abort": true, 00:08:11.161 "seek_hole": false, 00:08:11.161 "seek_data": false, 00:08:11.161 "copy": true, 00:08:11.161 "nvme_iov_md": false 00:08:11.161 }, 00:08:11.161 "memory_domains": [ 00:08:11.161 { 00:08:11.161 "dma_device_id": "system", 00:08:11.161 "dma_device_type": 1 00:08:11.161 } 00:08:11.161 ], 00:08:11.161 "driver_specific": { 00:08:11.161 "nvme": [ 00:08:11.161 { 00:08:11.161 "trid": { 00:08:11.161 "trtype": "TCP", 00:08:11.161 "adrfam": "IPv4", 00:08:11.161 "traddr": "10.0.0.3", 00:08:11.161 "trsvcid": "4420", 00:08:11.161 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:11.161 }, 00:08:11.161 "ctrlr_data": { 00:08:11.161 "cntlid": 1, 00:08:11.161 "vendor_id": "0x8086", 00:08:11.161 "model_number": "SPDK bdev Controller", 00:08:11.161 "serial_number": "SPDK0", 00:08:11.161 "firmware_revision": "25.01", 00:08:11.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.161 "oacs": { 00:08:11.161 "security": 0, 00:08:11.161 "format": 0, 00:08:11.161 "firmware": 0, 
00:08:11.161 "ns_manage": 0 00:08:11.161 }, 00:08:11.161 "multi_ctrlr": true, 00:08:11.161 "ana_reporting": false 00:08:11.161 }, 00:08:11.161 "vs": { 00:08:11.161 "nvme_version": "1.3" 00:08:11.161 }, 00:08:11.161 "ns_data": { 00:08:11.161 "id": 1, 00:08:11.161 "can_share": true 00:08:11.161 } 00:08:11.161 } 00:08:11.161 ], 00:08:11.161 "mp_policy": "active_passive" 00:08:11.161 } 00:08:11.161 } 00:08:11.161 ] 00:08:11.161 09:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:11.161 09:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63961 00:08:11.161 09:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:11.161 Running I/O for 10 seconds... 00:08:12.096 Latency(us) 00:08:12.096 [2024-10-08T09:14:03.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.096 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:12.096 [2024-10-08T09:14:03.779Z] =================================================================================================================== 00:08:12.096 [2024-10-08T09:14:03.779Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:12.096 00:08:13.032 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:13.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.291 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:13.291 [2024-10-08T09:14:04.974Z] =================================================================================================================== 00:08:13.291 [2024-10-08T09:14:04.974Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:13.291 00:08:13.550 true 00:08:13.550 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:13.550 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:13.809 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.809 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.809 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63961 00:08:14.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.376 Nvme0n1 : 3.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:08:14.376 [2024-10-08T09:14:06.059Z] =================================================================================================================== 00:08:14.376 [2024-10-08T09:14:06.059Z] Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:08:14.376 00:08:15.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.311 Nvme0n1 : 4.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:15.311 [2024-10-08T09:14:06.994Z] 
=================================================================================================================== 00:08:15.311 [2024-10-08T09:14:06.994Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:15.311 00:08:16.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.247 Nvme0n1 : 5.00 6429.40 25.11 0.00 0.00 0.00 0.00 0.00 00:08:16.247 [2024-10-08T09:14:07.930Z] =================================================================================================================== 00:08:16.247 [2024-10-08T09:14:07.930Z] Total : 6429.40 25.11 0.00 0.00 0.00 0.00 0.00 00:08:16.247 00:08:17.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.184 Nvme0n1 : 6.00 6373.83 24.90 0.00 0.00 0.00 0.00 0.00 00:08:17.184 [2024-10-08T09:14:08.867Z] =================================================================================================================== 00:08:17.184 [2024-10-08T09:14:08.867Z] Total : 6373.83 24.90 0.00 0.00 0.00 0.00 0.00 00:08:17.184 00:08:18.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.119 Nvme0n1 : 7.00 6316.00 24.67 0.00 0.00 0.00 0.00 0.00 00:08:18.119 [2024-10-08T09:14:09.802Z] =================================================================================================================== 00:08:18.119 [2024-10-08T09:14:09.802Z] Total : 6316.00 24.67 0.00 0.00 0.00 0.00 0.00 00:08:18.119 00:08:19.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.498 Nvme0n1 : 8.00 6304.38 24.63 0.00 0.00 0.00 0.00 0.00 00:08:19.498 [2024-10-08T09:14:11.181Z] =================================================================================================================== 00:08:19.498 [2024-10-08T09:14:11.181Z] Total : 6304.38 24.63 0.00 0.00 0.00 0.00 0.00 00:08:19.498 00:08:20.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.436 Nvme0n1 : 9.00 6267.11 24.48 0.00 0.00 0.00 0.00 0.00 00:08:20.436 [2024-10-08T09:14:12.119Z] =================================================================================================================== 00:08:20.436 [2024-10-08T09:14:12.119Z] Total : 6267.11 24.48 0.00 0.00 0.00 0.00 0.00 00:08:20.436 00:08:21.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.373 Nvme0n1 : 10.00 6250.00 24.41 0.00 0.00 0.00 0.00 0.00 00:08:21.373 [2024-10-08T09:14:13.056Z] =================================================================================================================== 00:08:21.373 [2024-10-08T09:14:13.056Z] Total : 6250.00 24.41 0.00 0.00 0.00 0.00 0.00 00:08:21.373 00:08:21.373 00:08:21.373 Latency(us) 00:08:21.373 [2024-10-08T09:14:13.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.373 Nvme0n1 : 10.01 6254.97 24.43 0.00 0.00 20458.73 3261.91 51952.17 00:08:21.373 [2024-10-08T09:14:13.056Z] =================================================================================================================== 00:08:21.373 [2024-10-08T09:14:13.056Z] Total : 6254.97 24.43 0.00 0.00 20458.73 3261.91 51952.17 00:08:21.373 { 00:08:21.373 "results": [ 00:08:21.373 { 00:08:21.373 "job": "Nvme0n1", 00:08:21.373 "core_mask": "0x2", 00:08:21.373 "workload": "randwrite", 00:08:21.373 "status": "finished", 00:08:21.373 "queue_depth": 128, 00:08:21.373 "io_size": 4096, 00:08:21.373 "runtime": 
10.01252, 00:08:21.373 "iops": 6254.968779088581, 00:08:21.373 "mibps": 24.43347179331477, 00:08:21.373 "io_failed": 0, 00:08:21.373 "io_timeout": 0, 00:08:21.373 "avg_latency_us": 20458.72673251, 00:08:21.373 "min_latency_us": 3261.9054545454546, 00:08:21.373 "max_latency_us": 51952.174545454545 00:08:21.373 } 00:08:21.373 ], 00:08:21.373 "core_count": 1 00:08:21.373 } 00:08:21.373 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63943 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 63943 ']' 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 63943 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63943 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:21.374 killing process with pid 63943 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63943' 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 63943 00:08:21.374 Received shutdown signal, test time was about 10.000000 seconds 00:08:21.374 00:08:21.374 Latency(us) 00:08:21.374 [2024-10-08T09:14:13.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.374 [2024-10-08T09:14:13.057Z] =================================================================================================================== 00:08:21.374 [2024-10-08T09:14:13.057Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:21.374 09:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 63943 00:08:21.633 09:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:21.892 09:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.152 09:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:22.152 09:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63574 00:08:22.720 
09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63574 00:08:22.720 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63574 Killed "${NVMF_APP[@]}" "$@" 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=64105 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 64105 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 64105 ']' 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.720 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.721 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.721 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.721 09:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.721 [2024-10-08 09:14:14.208333] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:08:22.721 [2024-10-08 09:14:14.208443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.721 [2024-10-08 09:14:14.352385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.980 [2024-10-08 09:14:14.463343] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.980 [2024-10-08 09:14:14.463430] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.980 [2024-10-08 09:14:14.463458] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.980 [2024-10-08 09:14:14.463466] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.980 [2024-10-08 09:14:14.463473] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
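The dirty variant differs in how the lvstore is closed: the nvmf target that owned it was killed above with kill -9 (pid 63574) before the store could be shut down cleanly, and a fresh nvmf_tgt (pid 64105) has just been started. What the trace below performs is, in condensed form, the recovery check sketched here; again the path and UUIDs are this run's values and rpc.py is shortened to its repo-relative path.

  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  #   loading the dirty store triggers blobstore recovery ("Performing recovery on blobstore", blobs 0x0/0x1)
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b 745a6d3a-a52e-4a49-bf29-2e5de47e5587 -t 2000    # the lvol survived the kill
  scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b | jq -r '.[0].free_clusters'         # expected 61
  scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b | jq -r '.[0].total_data_clusters'   # expected 99, i.e. the grow persisted
  # teardown
  scripts/rpc.py bdev_lvol_delete 745a6d3a-a52e-4a49-bf29-2e5de47e5587
  scripts/rpc.py bdev_lvol_delete_lvstore -u ef952984-4d46-4760-9c3e-ef074a2d711b
  scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

In the actual trace this attach-and-verify cycle runs twice: the aio bdev is deleted and re-created in between, with the intervening bdev_lvol_get_lvstores expected to fail with -19 "No such device", before the final delete of the lvol, the lvstore, and the backing file.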
00:08:22.980 [2024-10-08 09:14:14.463918] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.980 [2024-10-08 09:14:14.523466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.549 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.549 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:23.549 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:23.549 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.549 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.809 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.809 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.068 [2024-10-08 09:14:15.523875] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:24.068 [2024-10-08 09:14:15.524173] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:24.068 [2024-10-08 09:14:15.524881] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:24.068 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:24.068 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 745a6d3a-a52e-4a49-bf29-2e5de47e5587 00:08:24.068 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=745a6d3a-a52e-4a49-bf29-2e5de47e5587 00:08:24.068 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.068 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:24.068 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.068 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.068 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.328 09:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 745a6d3a-a52e-4a49-bf29-2e5de47e5587 -t 2000 00:08:24.587 [ 00:08:24.587 { 00:08:24.587 "name": "745a6d3a-a52e-4a49-bf29-2e5de47e5587", 00:08:24.587 "aliases": [ 00:08:24.587 "lvs/lvol" 00:08:24.587 ], 00:08:24.587 "product_name": "Logical Volume", 00:08:24.587 "block_size": 4096, 00:08:24.587 "num_blocks": 38912, 00:08:24.587 "uuid": "745a6d3a-a52e-4a49-bf29-2e5de47e5587", 00:08:24.587 "assigned_rate_limits": { 00:08:24.587 "rw_ios_per_sec": 0, 00:08:24.587 "rw_mbytes_per_sec": 0, 00:08:24.587 "r_mbytes_per_sec": 0, 00:08:24.587 "w_mbytes_per_sec": 0 00:08:24.587 }, 00:08:24.587 
"claimed": false, 00:08:24.587 "zoned": false, 00:08:24.587 "supported_io_types": { 00:08:24.587 "read": true, 00:08:24.587 "write": true, 00:08:24.587 "unmap": true, 00:08:24.587 "flush": false, 00:08:24.587 "reset": true, 00:08:24.587 "nvme_admin": false, 00:08:24.587 "nvme_io": false, 00:08:24.587 "nvme_io_md": false, 00:08:24.587 "write_zeroes": true, 00:08:24.587 "zcopy": false, 00:08:24.587 "get_zone_info": false, 00:08:24.587 "zone_management": false, 00:08:24.587 "zone_append": false, 00:08:24.587 "compare": false, 00:08:24.587 "compare_and_write": false, 00:08:24.587 "abort": false, 00:08:24.587 "seek_hole": true, 00:08:24.587 "seek_data": true, 00:08:24.587 "copy": false, 00:08:24.587 "nvme_iov_md": false 00:08:24.587 }, 00:08:24.587 "driver_specific": { 00:08:24.587 "lvol": { 00:08:24.587 "lvol_store_uuid": "ef952984-4d46-4760-9c3e-ef074a2d711b", 00:08:24.587 "base_bdev": "aio_bdev", 00:08:24.587 "thin_provision": false, 00:08:24.587 "num_allocated_clusters": 38, 00:08:24.587 "snapshot": false, 00:08:24.587 "clone": false, 00:08:24.587 "esnap_clone": false 00:08:24.587 } 00:08:24.587 } 00:08:24.587 } 00:08:24.587 ] 00:08:24.587 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:24.587 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:24.587 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:24.846 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:24.846 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:24.846 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:25.105 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:25.105 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.364 [2024-10-08 09:14:16.945409] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.364 09:14:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:25.364 09:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:25.625 request: 00:08:25.625 { 00:08:25.625 "uuid": "ef952984-4d46-4760-9c3e-ef074a2d711b", 00:08:25.625 "method": "bdev_lvol_get_lvstores", 00:08:25.625 "req_id": 1 00:08:25.625 } 00:08:25.625 Got JSON-RPC error response 00:08:25.625 response: 00:08:25.625 { 00:08:25.625 "code": -19, 00:08:25.625 "message": "No such device" 00:08:25.625 } 00:08:25.625 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:25.625 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.625 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.625 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.625 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.883 aio_bdev 00:08:25.883 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 745a6d3a-a52e-4a49-bf29-2e5de47e5587 00:08:25.883 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=745a6d3a-a52e-4a49-bf29-2e5de47e5587 00:08:25.883 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.883 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:25.883 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.883 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.883 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:26.142 09:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 745a6d3a-a52e-4a49-bf29-2e5de47e5587 -t 2000 00:08:26.402 [ 00:08:26.402 { 
00:08:26.402 "name": "745a6d3a-a52e-4a49-bf29-2e5de47e5587", 00:08:26.402 "aliases": [ 00:08:26.402 "lvs/lvol" 00:08:26.402 ], 00:08:26.402 "product_name": "Logical Volume", 00:08:26.402 "block_size": 4096, 00:08:26.402 "num_blocks": 38912, 00:08:26.402 "uuid": "745a6d3a-a52e-4a49-bf29-2e5de47e5587", 00:08:26.402 "assigned_rate_limits": { 00:08:26.402 "rw_ios_per_sec": 0, 00:08:26.402 "rw_mbytes_per_sec": 0, 00:08:26.402 "r_mbytes_per_sec": 0, 00:08:26.402 "w_mbytes_per_sec": 0 00:08:26.402 }, 00:08:26.402 "claimed": false, 00:08:26.402 "zoned": false, 00:08:26.402 "supported_io_types": { 00:08:26.402 "read": true, 00:08:26.402 "write": true, 00:08:26.402 "unmap": true, 00:08:26.402 "flush": false, 00:08:26.402 "reset": true, 00:08:26.402 "nvme_admin": false, 00:08:26.402 "nvme_io": false, 00:08:26.402 "nvme_io_md": false, 00:08:26.402 "write_zeroes": true, 00:08:26.402 "zcopy": false, 00:08:26.402 "get_zone_info": false, 00:08:26.402 "zone_management": false, 00:08:26.402 "zone_append": false, 00:08:26.402 "compare": false, 00:08:26.402 "compare_and_write": false, 00:08:26.402 "abort": false, 00:08:26.402 "seek_hole": true, 00:08:26.402 "seek_data": true, 00:08:26.402 "copy": false, 00:08:26.402 "nvme_iov_md": false 00:08:26.402 }, 00:08:26.402 "driver_specific": { 00:08:26.402 "lvol": { 00:08:26.402 "lvol_store_uuid": "ef952984-4d46-4760-9c3e-ef074a2d711b", 00:08:26.402 "base_bdev": "aio_bdev", 00:08:26.402 "thin_provision": false, 00:08:26.402 "num_allocated_clusters": 38, 00:08:26.402 "snapshot": false, 00:08:26.402 "clone": false, 00:08:26.402 "esnap_clone": false 00:08:26.402 } 00:08:26.402 } 00:08:26.402 } 00:08:26.402 ] 00:08:26.402 09:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:26.402 09:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:26.402 09:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:26.970 09:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:26.970 09:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:26.970 09:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:27.229 09:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.229 09:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 745a6d3a-a52e-4a49-bf29-2e5de47e5587 00:08:27.488 09:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef952984-4d46-4760-9c3e-ef074a2d711b 00:08:27.747 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.006 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.266 00:08:28.266 real 0m21.644s 00:08:28.266 user 0m44.229s 00:08:28.266 sys 0m9.455s 00:08:28.266 ************************************ 00:08:28.266 END TEST lvs_grow_dirty 00:08:28.266 ************************************ 00:08:28.266 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.266 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.525 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:28.525 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:28.525 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:28.525 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:28.525 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:28.525 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:28.525 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:28.525 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:28.525 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:28.525 nvmf_trace.0 00:08:28.525 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:28.525 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:28.525 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:28.525 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:28.784 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.784 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:28.784 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.784 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.784 rmmod nvme_tcp 00:08:28.784 rmmod nvme_fabrics 00:08:28.784 rmmod nvme_keyring 00:08:28.784 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 64105 ']' 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 64105 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 64105 ']' 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 64105 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:28.785 09:14:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64105 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.785 killing process with pid 64105 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64105' 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 64105 00:08:28.785 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 64105 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:29.044 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:29.304 00:08:29.304 real 0m43.898s 00:08:29.304 user 1m9.724s 00:08:29.304 sys 0m12.993s 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.304 ************************************ 00:08:29.304 END TEST nvmf_lvs_grow 00:08:29.304 ************************************ 00:08:29.304 09:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.564 09:14:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:29.564 09:14:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.564 09:14:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.564 09:14:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.564 ************************************ 00:08:29.564 START TEST nvmf_bdev_io_wait 00:08:29.564 ************************************ 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:29.564 * Looking for test storage... 
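Before the bdev_io_wait run below gets going, the lvs_grow_dirty flow that just finished is easier to follow without the xtrace noise. Reduced to its effective calls (a reconstruction from the trace above, not the test script verbatim; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the UUIDs are the ones from this run):

  # drop the backing AIO bdev while the lvstore is still live ("dirty")
  rpc.py bdev_aio_delete aio_bdev
  # the lvstore must now be unreachable: lookup is expected to fail with -19 "No such device"
  rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b
  # re-create the AIO bdev on the same backing file and let examine re-load the lvstore
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b 745a6d3a-a52e-4a49-bf29-2e5de47e5587 -t 2000
  # cluster accounting must survive the reload: 61 free / 99 total, as asserted above
  rpc.py bdev_lvol_get_lvstores -u ef952984-4d46-4760-9c3e-ef074a2d711b | jq -r '.[0].free_clusters'
  # teardown: lvol, lvstore, AIO bdev, backing file
  rpc.py bdev_lvol_delete 745a6d3a-a52e-4a49-bf29-2e5de47e5587
  rpc.py bdev_lvol_delete_lvstore -u ef952984-4d46-4760-9c3e-ef074a2d711b
  rpc.py bdev_aio_delete aio_bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev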
00:08:29.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.564 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:29.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.565 --rc genhtml_branch_coverage=1 00:08:29.565 --rc genhtml_function_coverage=1 00:08:29.565 --rc genhtml_legend=1 00:08:29.565 --rc geninfo_all_blocks=1 00:08:29.565 --rc geninfo_unexecuted_blocks=1 00:08:29.565 00:08:29.565 ' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:29.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.565 --rc genhtml_branch_coverage=1 00:08:29.565 --rc genhtml_function_coverage=1 00:08:29.565 --rc genhtml_legend=1 00:08:29.565 --rc geninfo_all_blocks=1 00:08:29.565 --rc geninfo_unexecuted_blocks=1 00:08:29.565 00:08:29.565 ' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:29.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.565 --rc genhtml_branch_coverage=1 00:08:29.565 --rc genhtml_function_coverage=1 00:08:29.565 --rc genhtml_legend=1 00:08:29.565 --rc geninfo_all_blocks=1 00:08:29.565 --rc geninfo_unexecuted_blocks=1 00:08:29.565 00:08:29.565 ' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:29.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.565 --rc genhtml_branch_coverage=1 00:08:29.565 --rc genhtml_function_coverage=1 00:08:29.565 --rc genhtml_legend=1 00:08:29.565 --rc geninfo_all_blocks=1 00:08:29.565 --rc geninfo_unexecuted_blocks=1 00:08:29.565 00:08:29.565 ' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.565 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
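For orientation, the preamble of bdev_io_wait.sh traced above boils down to the following (a sketch reconstructed from the trace, not the script verbatim):

  # target/bdev_io_wait.sh, effective preamble
  source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # ports, hostnqn, veth/netns helpers
  MALLOC_BDEV_SIZE=64      # size of the Malloc0 bdev created later
  MALLOC_BLOCK_SIZE=512    # its block size
  nvmftestinit             # builds the veth/bridge/netns fixture traced next

nvmftestinit is what produces the network setup that follows: two initiator interfaces (10.0.0.1 and 10.0.0.2) on the host side, two target interfaces (10.0.0.3 and 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined over the nvmf_br bridge.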
00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:29.565 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.825 
09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:29.825 Cannot find device "nvmf_init_br" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:29.825 Cannot find device "nvmf_init_br2" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:29.825 Cannot find device "nvmf_tgt_br" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.825 Cannot find device "nvmf_tgt_br2" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:29.825 Cannot find device "nvmf_init_br" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:29.825 Cannot find device "nvmf_init_br2" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:29.825 Cannot find device "nvmf_tgt_br" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:29.825 Cannot find device "nvmf_tgt_br2" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:29.825 Cannot find device "nvmf_br" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:29.825 Cannot find device "nvmf_init_if" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:29.825 Cannot find device "nvmf_init_if2" 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:29.825 
09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:29.825 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:30.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:30.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:08:30.085 00:08:30.085 --- 10.0.0.3 ping statistics --- 00:08:30.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.085 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:30.085 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:30.085 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:08:30.085 00:08:30.085 --- 10.0.0.4 ping statistics --- 00:08:30.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.085 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:08:30.085 00:08:30.085 --- 10.0.0.1 ping statistics --- 00:08:30.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.085 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:30.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:08:30.085 00:08:30.085 --- 10.0.0.2 ping statistics --- 00:08:30.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.085 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # return 0 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:30.085 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=64483 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 64483 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 64483 ']' 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.086 09:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.086 [2024-10-08 09:14:21.759149] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:08:30.086 [2024-10-08 09:14:21.759260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.345 [2024-10-08 09:14:21.902202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.604 [2024-10-08 09:14:22.028843] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.604 [2024-10-08 09:14:22.028936] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.605 [2024-10-08 09:14:22.028957] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.605 [2024-10-08 09:14:22.028986] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.605 [2024-10-08 09:14:22.028999] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.605 [2024-10-08 09:14:22.030848] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.605 [2024-10-08 09:14:22.031007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.605 [2024-10-08 09:14:22.031192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.605 [2024-10-08 09:14:22.031210] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 [2024-10-08 09:14:22.984438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.543 09:14:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 [2024-10-08 09:14:23.001329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 Malloc0 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.543 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 [2024-10-08 09:14:23.080189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64518 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64520 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:31.544 09:14:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:31.544 { 00:08:31.544 "params": { 00:08:31.544 "name": "Nvme$subsystem", 00:08:31.544 "trtype": "$TEST_TRANSPORT", 00:08:31.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.544 "adrfam": "ipv4", 00:08:31.544 "trsvcid": "$NVMF_PORT", 00:08:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.544 "hdgst": ${hdgst:-false}, 00:08:31.544 "ddgst": ${ddgst:-false} 00:08:31.544 }, 00:08:31.544 "method": "bdev_nvme_attach_controller" 00:08:31.544 } 00:08:31.544 EOF 00:08:31.544 )") 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64522 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:31.544 { 00:08:31.544 "params": { 00:08:31.544 "name": "Nvme$subsystem", 00:08:31.544 "trtype": "$TEST_TRANSPORT", 00:08:31.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.544 "adrfam": "ipv4", 00:08:31.544 "trsvcid": "$NVMF_PORT", 00:08:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.544 "hdgst": ${hdgst:-false}, 00:08:31.544 "ddgst": ${ddgst:-false} 00:08:31.544 }, 00:08:31.544 "method": "bdev_nvme_attach_controller" 00:08:31.544 } 00:08:31.544 EOF 00:08:31.544 )") 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64525 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:31.544 { 00:08:31.544 "params": { 00:08:31.544 "name": "Nvme$subsystem", 00:08:31.544 "trtype": "$TEST_TRANSPORT", 00:08:31.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.544 "adrfam": "ipv4", 00:08:31.544 "trsvcid": "$NVMF_PORT", 00:08:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.544 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.544 "hdgst": ${hdgst:-false}, 00:08:31.544 "ddgst": ${ddgst:-false} 00:08:31.544 }, 00:08:31.544 "method": "bdev_nvme_attach_controller" 00:08:31.544 } 00:08:31.544 EOF 00:08:31.544 )") 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:31.544 { 00:08:31.544 "params": { 00:08:31.544 "name": "Nvme$subsystem", 00:08:31.544 "trtype": "$TEST_TRANSPORT", 00:08:31.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.544 "adrfam": "ipv4", 00:08:31.544 "trsvcid": "$NVMF_PORT", 00:08:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.544 "hdgst": ${hdgst:-false}, 00:08:31.544 "ddgst": ${ddgst:-false} 00:08:31.544 }, 00:08:31.544 "method": "bdev_nvme_attach_controller" 00:08:31.544 } 00:08:31.544 EOF 00:08:31.544 )") 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:31.544 "params": { 00:08:31.544 "name": "Nvme1", 00:08:31.544 "trtype": "tcp", 00:08:31.544 "traddr": "10.0.0.3", 00:08:31.544 "adrfam": "ipv4", 00:08:31.544 "trsvcid": "4420", 00:08:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.544 "hdgst": false, 00:08:31.544 "ddgst": false 00:08:31.544 }, 00:08:31.544 "method": "bdev_nvme_attach_controller" 00:08:31.544 }' 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:31.544 "params": { 00:08:31.544 "name": "Nvme1", 00:08:31.544 "trtype": "tcp", 00:08:31.544 "traddr": "10.0.0.3", 00:08:31.544 "adrfam": "ipv4", 00:08:31.544 "trsvcid": "4420", 00:08:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.544 "hdgst": false, 00:08:31.544 "ddgst": false 00:08:31.544 }, 00:08:31.544 "method": "bdev_nvme_attach_controller" 00:08:31.544 }' 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:31.544 "params": { 00:08:31.544 "name": "Nvme1", 00:08:31.544 "trtype": "tcp", 00:08:31.544 "traddr": "10.0.0.3", 00:08:31.544 "adrfam": "ipv4", 00:08:31.544 "trsvcid": "4420", 00:08:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.544 "hdgst": false, 00:08:31.544 "ddgst": false 00:08:31.544 }, 00:08:31.544 "method": "bdev_nvme_attach_controller" 00:08:31.544 }' 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:31.544 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:31.544 "params": { 00:08:31.544 "name": "Nvme1", 00:08:31.544 "trtype": "tcp", 00:08:31.544 "traddr": "10.0.0.3", 00:08:31.544 "adrfam": "ipv4", 00:08:31.544 "trsvcid": "4420", 00:08:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.544 "hdgst": false, 00:08:31.544 "ddgst": false 00:08:31.545 }, 00:08:31.545 "method": "bdev_nvme_attach_controller" 00:08:31.545 }' 00:08:31.545 [2024-10-08 09:14:23.154493] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:08:31.545 [2024-10-08 09:14:23.154590] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:31.545 09:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64518 00:08:31.545 [2024-10-08 09:14:23.167376] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:08:31.545 [2024-10-08 09:14:23.167458] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:31.545 [2024-10-08 09:14:23.171498] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:08:31.545 [2024-10-08 09:14:23.171616] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:31.545 [2024-10-08 09:14:23.173085] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:08:31.545 [2024-10-08 09:14:23.173308] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:31.804 [2024-10-08 09:14:23.377707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.804 [2024-10-08 09:14:23.464089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.063 [2024-10-08 09:14:23.521190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:08:32.063 [2024-10-08 09:14:23.557108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.064 [2024-10-08 09:14:23.574267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.064 [2024-10-08 09:14:23.583507] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:08:32.064 [2024-10-08 09:14:23.648776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.064 [2024-10-08 09:14:23.661012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.064 [2024-10-08 09:14:23.688745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:08:32.064 Running I/O for 1 seconds... 00:08:32.323 [2024-10-08 09:14:23.758708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.323 [2024-10-08 09:14:23.766551] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:32.323 Running I/O for 1 seconds... 00:08:32.323 [2024-10-08 09:14:23.858027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.323 Running I/O for 1 seconds... 00:08:32.582 Running I/O for 1 seconds... 
00:08:33.151 9355.00 IOPS, 36.54 MiB/s
00:08:33.151 Latency(us)
00:08:33.151 [2024-10-08T09:14:24.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:33.151 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:33.151 Nvme1n1 : 1.01 9409.16 36.75 0.00 0.00 13543.12 7983.48 18826.71
00:08:33.151 [2024-10-08T09:14:24.834Z] ===================================================================================================================
00:08:33.151 [2024-10-08T09:14:24.834Z] Total : 9409.16 36.75 0.00 0.00 13543.12 7983.48 18826.71
00:08:33.151 165816.00 IOPS, 647.72 MiB/s
00:08:33.151 Latency(us)
00:08:33.151 [2024-10-08T09:14:24.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:33.151 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:33.151 Nvme1n1 : 1.00 165476.33 646.39 0.00 0.00 769.49 426.36 2040.55
00:08:33.151 [2024-10-08T09:14:24.834Z] ===================================================================================================================
00:08:33.151 [2024-10-08T09:14:24.834Z] Total : 165476.33 646.39 0.00 0.00 769.49 426.36 2040.55
00:08:33.432 5872.00 IOPS, 22.94 MiB/s
00:08:33.433 Latency(us)
00:08:33.433 [2024-10-08T09:14:25.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:33.433 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:33.433 Nvme1n1 : 1.02 5915.78 23.11 0.00 0.00 21477.03 9770.82 28955.00
00:08:33.433 [2024-10-08T09:14:25.116Z] ===================================================================================================================
00:08:33.433 [2024-10-08T09:14:25.116Z] Total : 5915.78 23.11 0.00 0.00 21477.03 9770.82 28955.00
00:08:33.433 4534.00 IOPS, 17.71 MiB/s
00:08:33.433 Latency(us)
00:08:33.433 [2024-10-08T09:14:25.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:33.433 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:33.433 Nvme1n1 : 1.01 4617.28 18.04 0.00 0.00 27548.69 2800.17 41943.04
00:08:33.433 [2024-10-08T09:14:25.116Z] ===================================================================================================================
00:08:33.433 [2024-10-08T09:14:25.116Z] Total : 4617.28 18.04 0.00 0.00 27548.69 2800.17 41943.04
00:08:33.714 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64520
00:08:33.714 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64522
00:08:33.714 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64525
00:08:33.714 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.715 rmmod nvme_tcp 00:08:33.715 rmmod nvme_fabrics 00:08:33.715 rmmod nvme_keyring 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 64483 ']' 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 64483 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 64483 ']' 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 64483 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.715 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64483 00:08:33.974 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.974 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.974 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64483' 00:08:33.974 killing process with pid 64483 00:08:33.974 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 64483 00:08:33.974 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 64483 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:34.232 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:34.233 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:34.233 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:34.233 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:34.233 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:34.233 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.233 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.233 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:34.491 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.491 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.491 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.491 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:34.491 00:08:34.491 real 0m4.939s 00:08:34.491 user 0m19.778s 00:08:34.491 sys 0m2.762s 00:08:34.491 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.491 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.491 ************************************ 00:08:34.491 END TEST nvmf_bdev_io_wait 00:08:34.491 ************************************ 00:08:34.491 09:14:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:34.491 09:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:34.491 09:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.491 09:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.491 ************************************ 00:08:34.491 START TEST nvmf_queue_depth 00:08:34.491 ************************************ 00:08:34.491 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:34.491 * Looking for test storage... 
00:08:34.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.491 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:34.491 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:34.491 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:34.751 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:34.751 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.751 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.751 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.751 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:34.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.752 --rc genhtml_branch_coverage=1 00:08:34.752 --rc genhtml_function_coverage=1 00:08:34.752 --rc genhtml_legend=1 00:08:34.752 --rc geninfo_all_blocks=1 00:08:34.752 --rc geninfo_unexecuted_blocks=1 00:08:34.752 00:08:34.752 ' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:34.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.752 --rc genhtml_branch_coverage=1 00:08:34.752 --rc genhtml_function_coverage=1 00:08:34.752 --rc genhtml_legend=1 00:08:34.752 --rc geninfo_all_blocks=1 00:08:34.752 --rc geninfo_unexecuted_blocks=1 00:08:34.752 00:08:34.752 ' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:34.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.752 --rc genhtml_branch_coverage=1 00:08:34.752 --rc genhtml_function_coverage=1 00:08:34.752 --rc genhtml_legend=1 00:08:34.752 --rc geninfo_all_blocks=1 00:08:34.752 --rc geninfo_unexecuted_blocks=1 00:08:34.752 00:08:34.752 ' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:34.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.752 --rc genhtml_branch_coverage=1 00:08:34.752 --rc genhtml_function_coverage=1 00:08:34.752 --rc genhtml_legend=1 00:08:34.752 --rc geninfo_all_blocks=1 00:08:34.752 --rc geninfo_unexecuted_blocks=1 00:08:34.752 00:08:34.752 ' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.752 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:34.752 
09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:34.752 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.753 09:14:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:34.753 Cannot find device "nvmf_init_br" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:34.753 Cannot find device "nvmf_init_br2" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:34.753 Cannot find device "nvmf_tgt_br" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.753 Cannot find device "nvmf_tgt_br2" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:34.753 Cannot find device "nvmf_init_br" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:34.753 Cannot find device "nvmf_init_br2" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:34.753 Cannot find device "nvmf_tgt_br" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:34.753 Cannot find device "nvmf_tgt_br2" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:34.753 Cannot find device "nvmf_br" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:34.753 Cannot find device "nvmf_init_if" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:34.753 Cannot find device "nvmf_init_if2" 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.753 09:14:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.753 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.012 
09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:35.012 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:35.013 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.013 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:08:35.013 00:08:35.013 --- 10.0.0.3 ping statistics --- 00:08:35.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.013 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:35.013 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:35.013 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:08:35.013 00:08:35.013 --- 10.0.0.4 ping statistics --- 00:08:35.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.013 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:35.013 00:08:35.013 --- 10.0.0.1 ping statistics --- 00:08:35.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.013 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:35.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:35.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:35.013 00:08:35.013 --- 10.0.0.2 ping statistics --- 00:08:35.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.013 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # return 0 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=64822 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 64822 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64822 ']' 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.013 09:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.272 [2024-10-08 09:14:26.700398] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:08:35.272 [2024-10-08 09:14:26.700515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.272 [2024-10-08 09:14:26.847057] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.531 [2024-10-08 09:14:27.002528] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.531 [2024-10-08 09:14:27.002608] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.531 [2024-10-08 09:14:27.002637] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.531 [2024-10-08 09:14:27.002645] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.531 [2024-10-08 09:14:27.002653] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.531 [2024-10-08 09:14:27.003206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.531 [2024-10-08 09:14:27.083779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.102 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.102 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:36.102 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:36.102 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:36.102 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 [2024-10-08 09:14:27.806389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 Malloc0 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 [2024-10-08 09:14:27.874320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64854 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64854 /var/tmp/bdevperf.sock 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64854 ']' 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.362 09:14:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 [2024-10-08 09:14:27.938731] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:08:36.362 [2024-10-08 09:14:27.938859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64854 ] 00:08:36.622 [2024-10-08 09:14:28.077784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.622 [2024-10-08 09:14:28.195049] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.622 [2024-10-08 09:14:28.257135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.568 09:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.568 09:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:37.568 09:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:37.568 09:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.568 09:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.568 NVMe0n1 00:08:37.568 09:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.568 09:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:37.568 Running I/O for 10 seconds... 00:08:39.513 6365.00 IOPS, 24.86 MiB/s [2024-10-08T09:14:32.575Z] 6932.00 IOPS, 27.08 MiB/s [2024-10-08T09:14:33.512Z] 7197.33 IOPS, 28.11 MiB/s [2024-10-08T09:14:34.457Z] 7438.50 IOPS, 29.06 MiB/s [2024-10-08T09:14:35.392Z] 7597.20 IOPS, 29.68 MiB/s [2024-10-08T09:14:36.329Z] 7710.83 IOPS, 30.12 MiB/s [2024-10-08T09:14:37.267Z] 7797.57 IOPS, 30.46 MiB/s [2024-10-08T09:14:38.228Z] 7951.88 IOPS, 31.06 MiB/s [2024-10-08T09:14:39.610Z] 7987.89 IOPS, 31.20 MiB/s [2024-10-08T09:14:39.610Z] 8029.80 IOPS, 31.37 MiB/s 00:08:47.927 Latency(us) 00:08:47.927 [2024-10-08T09:14:39.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.927 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:47.927 Verification LBA range: start 0x0 length 0x4000 00:08:47.927 NVMe0n1 : 10.07 8070.94 31.53 0.00 0.00 126279.95 15013.70 94848.47 00:08:47.927 [2024-10-08T09:14:39.610Z] =================================================================================================================== 00:08:47.927 [2024-10-08T09:14:39.610Z] Total : 8070.94 31.53 0.00 0.00 126279.95 15013.70 94848.47 00:08:47.927 { 00:08:47.927 "results": [ 00:08:47.927 { 00:08:47.927 "job": "NVMe0n1", 00:08:47.927 "core_mask": "0x1", 00:08:47.927 "workload": "verify", 00:08:47.927 "status": "finished", 00:08:47.927 "verify_range": { 00:08:47.927 "start": 0, 00:08:47.927 "length": 16384 00:08:47.927 }, 00:08:47.927 "queue_depth": 1024, 00:08:47.927 "io_size": 4096, 00:08:47.927 "runtime": 10.070698, 00:08:47.927 "iops": 8070.940067907904, 00:08:47.927 "mibps": 31.52710964026525, 00:08:47.927 "io_failed": 0, 00:08:47.927 "io_timeout": 0, 00:08:47.927 "avg_latency_us": 126279.95011023623, 00:08:47.927 "min_latency_us": 15013.701818181818, 00:08:47.927 "max_latency_us": 94848.46545454545 00:08:47.927 
} 00:08:47.927 ], 00:08:47.927 "core_count": 1 00:08:47.927 } 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64854 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64854 ']' 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64854 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64854 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64854' 00:08:47.927 killing process with pid 64854 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64854 00:08:47.927 Received shutdown signal, test time was about 10.000000 seconds 00:08:47.927 00:08:47.927 Latency(us) 00:08:47.927 [2024-10-08T09:14:39.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.927 [2024-10-08T09:14:39.610Z] =================================================================================================================== 00:08:47.927 [2024-10-08T09:14:39.610Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64854 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:47.927 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.186 rmmod nvme_tcp 00:08:48.186 rmmod nvme_fabrics 00:08:48.186 rmmod nvme_keyring 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 64822 ']' 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 64822 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64822 ']' 00:08:48.186 
09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64822 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64822 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:48.186 killing process with pid 64822 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64822' 00:08:48.186 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64822 00:08:48.187 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64822 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:48.754 09:14:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:48.754 00:08:48.754 real 0m14.377s 00:08:48.754 user 0m23.972s 00:08:48.754 sys 0m2.745s 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.754 ************************************ 00:08:48.754 END TEST nvmf_queue_depth 00:08:48.754 ************************************ 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.754 09:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.015 ************************************ 00:08:49.015 START TEST nvmf_target_multipath 00:08:49.015 ************************************ 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:49.015 * Looking for test storage... 
00:08:49.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:49.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.015 --rc genhtml_branch_coverage=1 00:08:49.015 --rc genhtml_function_coverage=1 00:08:49.015 --rc genhtml_legend=1 00:08:49.015 --rc geninfo_all_blocks=1 00:08:49.015 --rc geninfo_unexecuted_blocks=1 00:08:49.015 00:08:49.015 ' 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:49.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.015 --rc genhtml_branch_coverage=1 00:08:49.015 --rc genhtml_function_coverage=1 00:08:49.015 --rc genhtml_legend=1 00:08:49.015 --rc geninfo_all_blocks=1 00:08:49.015 --rc geninfo_unexecuted_blocks=1 00:08:49.015 00:08:49.015 ' 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:49.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.015 --rc genhtml_branch_coverage=1 00:08:49.015 --rc genhtml_function_coverage=1 00:08:49.015 --rc genhtml_legend=1 00:08:49.015 --rc geninfo_all_blocks=1 00:08:49.015 --rc geninfo_unexecuted_blocks=1 00:08:49.015 00:08:49.015 ' 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:49.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.015 --rc genhtml_branch_coverage=1 00:08:49.015 --rc genhtml_function_coverage=1 00:08:49.015 --rc genhtml_legend=1 00:08:49.015 --rc geninfo_all_blocks=1 00:08:49.015 --rc geninfo_unexecuted_blocks=1 00:08:49.015 00:08:49.015 ' 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.015 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.016 
09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.016 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:49.016 09:14:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:49.016 Cannot find device "nvmf_init_br" 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:49.016 Cannot find device "nvmf_init_br2" 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:49.016 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:49.276 Cannot find device "nvmf_tgt_br" 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.276 Cannot find device "nvmf_tgt_br2" 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:49.276 Cannot find device "nvmf_init_br" 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:49.276 Cannot find device "nvmf_init_br2" 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:49.276 Cannot find device "nvmf_tgt_br" 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:49.276 Cannot find device "nvmf_tgt_br2" 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:49.276 Cannot find device "nvmf_br" 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:49.276 Cannot find device "nvmf_init_if" 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:49.276 Cannot find device "nvmf_init_if2" 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
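[annotation] The nvmf_veth_init trace above assembles the virtual test network one command at a time. A condensed sketch of the veth/namespace layout it is building, reconstructed only from the ip invocations visible in this trace (interface names and 10.0.0.x addresses are taken from the log; the bridge, iptables and ping steps follow further down and are omitted here):

    # sketch: initiator-side and target-side veth pairs; target ends moved into a netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator addresses stay in the root namespace, target addresses live in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up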
00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:49.276 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:49.536 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.536 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:49.536 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:49.536 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.536 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:08:49.536 00:08:49.536 --- 10.0.0.3 ping statistics --- 00:08:49.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.536 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:49.536 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:49.536 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:08:49.536 00:08:49.536 --- 10.0.0.4 ping statistics --- 00:08:49.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.536 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:49.536 00:08:49.536 --- 10.0.0.1 ping statistics --- 00:08:49.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.536 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:49.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:08:49.536 00:08:49.536 --- 10.0.0.2 ping statistics --- 00:08:49.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.536 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # return 0 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # nvmfpid=65237 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # waitforlisten 65237 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 65237 ']' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
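[annotation] With both data-path addresses (10.0.0.3 and 10.0.0.4) answering the pings above, nvmfappstart launches the target application inside the namespace and waits for its RPC socket. A minimal recap of that launch, using only the binary path and flags traced in this log; the wait loop below is a paraphrase (the real waitforlisten helper is not shown in this excerpt), and rpc_get_methods is simply used here as a cheap liveness probe:

    # sketch: start nvmf_tgt in the target namespace with the flags from the trace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the RPC socket answers before creating the transport/subsystem
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done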
00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.536 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.536 [2024-10-08 09:14:41.159228] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:08:49.536 [2024-10-08 09:14:41.159323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.795 [2024-10-08 09:14:41.302769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.796 [2024-10-08 09:14:41.435887] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.796 [2024-10-08 09:14:41.435992] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.796 [2024-10-08 09:14:41.436007] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.796 [2024-10-08 09:14:41.436019] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.796 [2024-10-08 09:14:41.436028] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.796 [2024-10-08 09:14:41.437565] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.796 [2024-10-08 09:14:41.437716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.796 [2024-10-08 09:14:41.437860] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.796 [2024-10-08 09:14:41.438011] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.054 [2024-10-08 09:14:41.500578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.623 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.623 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:08:50.623 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:50.623 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.623 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:50.623 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.623 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.882 [2024-10-08 09:14:42.533947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.882 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:51.207 Malloc0 00:08:51.207 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:51.465 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.724 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:52.291 [2024-10-08 09:14:43.672123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:52.291 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:52.291 [2024-10-08 09:14:43.968783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:52.550 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid=a5ef64a0-86d4-4d8b-af10-05a9f556092c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:52.550 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid=a5ef64a0-86d4-4d8b-af10-05a9f556092c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:52.807 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.807 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:52.807 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.808 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:52.808 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65332 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:54.737 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:54.737 [global] 00:08:54.737 thread=1 00:08:54.737 invalidate=1 00:08:54.737 rw=randrw 00:08:54.737 time_based=1 00:08:54.737 runtime=6 00:08:54.737 ioengine=libaio 00:08:54.737 direct=1 00:08:54.737 bs=4096 00:08:54.737 iodepth=128 00:08:54.737 norandommap=0 00:08:54.737 numjobs=1 00:08:54.737 00:08:54.737 verify_dump=1 00:08:54.737 verify_backlog=512 00:08:54.737 verify_state_save=0 00:08:54.737 do_verify=1 00:08:54.737 verify=crc32c-intel 00:08:54.737 [job0] 00:08:54.737 filename=/dev/nvme0n1 00:08:54.737 Could not set queue depth (nvme0n1) 00:08:54.996 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.996 fio-3.35 00:08:54.996 Starting 1 thread 00:08:55.932 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:56.192 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:56.451 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:56.710 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:56.968 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:56.969 09:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65332 00:09:01.158 00:09:01.158 job0: (groupid=0, jobs=1): err= 0: pid=65353: Tue Oct 8 09:14:52 2024 00:09:01.158 read: IOPS=9627, BW=37.6MiB/s (39.4MB/s)(226MiB/6006msec) 00:09:01.158 slat (usec): min=4, max=6714, avg=61.80, stdev=251.96 00:09:01.158 clat (usec): min=1407, max=18188, avg=9102.75, stdev=1664.78 00:09:01.158 lat (usec): min=1721, max=18199, avg=9164.55, stdev=1668.53 00:09:01.158 clat percentiles (usec): 00:09:01.158 | 1.00th=[ 4686], 5.00th=[ 6783], 10.00th=[ 7635], 20.00th=[ 8160], 00:09:01.158 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:09:01.158 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[12780], 00:09:01.158 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15401], 99.95th=[15926], 00:09:01.158 | 99.99th=[18220] 00:09:01.158 bw ( KiB/s): min= 9008, max=25264, per=51.12%, avg=19685.91, stdev=5159.28, samples=11 00:09:01.158 iops : min= 2252, max= 6316, avg=4921.45, stdev=1289.86, samples=11 00:09:01.158 write: IOPS=5463, BW=21.3MiB/s (22.4MB/s)(116MiB/5430msec); 0 zone resets 00:09:01.158 slat (usec): min=7, max=3576, avg=70.25, stdev=175.68 00:09:01.158 clat (usec): min=1869, max=18480, avg=7938.02, stdev=1526.02 00:09:01.158 lat (usec): min=1919, max=18507, avg=8008.27, stdev=1531.00 00:09:01.158 clat percentiles (usec): 00:09:01.158 | 1.00th=[ 3556], 5.00th=[ 4555], 10.00th=[ 5866], 20.00th=[ 7308], 00:09:01.158 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8356], 00:09:01.158 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9634], 00:09:01.158 | 99.00th=[12518], 99.50th=[13173], 99.90th=[14615], 99.95th=[16057], 00:09:01.158 | 99.99th=[18482] 00:09:01.158 bw ( KiB/s): min= 9168, max=25144, per=90.19%, avg=19712.82, stdev=5085.18, samples=11 00:09:01.158 iops : min= 2292, max= 6286, avg=4928.18, stdev=1271.34, samples=11 00:09:01.158 lat (msec) : 2=0.02%, 4=1.09%, 10=86.82%, 20=12.07% 00:09:01.158 cpu : usr=5.36%, sys=20.82%, ctx=4974, majf=0, minf=78 00:09:01.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:01.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.158 issued rwts: total=57823,29668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.158 00:09:01.158 Run status group 0 (all jobs): 00:09:01.158 READ: bw=37.6MiB/s (39.4MB/s), 37.6MiB/s-37.6MiB/s (39.4MB/s-39.4MB/s), io=226MiB (237MB), run=6006-6006msec 00:09:01.158 WRITE: bw=21.3MiB/s (22.4MB/s), 21.3MiB/s-21.3MiB/s (22.4MB/s-22.4MB/s), io=116MiB (122MB), run=5430-5430msec 00:09:01.158 00:09:01.158 Disk stats (read/write): 00:09:01.158 nvme0n1: ios=56927/29099, merge=0/0, ticks=497122/217183, in_queue=714305, util=98.60% 00:09:01.158 09:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:01.417 09:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65435 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:01.676 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:01.676 [global] 00:09:01.676 thread=1 00:09:01.676 invalidate=1 00:09:01.676 rw=randrw 00:09:01.676 time_based=1 00:09:01.676 runtime=6 00:09:01.676 ioengine=libaio 00:09:01.676 direct=1 00:09:01.676 bs=4096 00:09:01.676 iodepth=128 00:09:01.676 norandommap=0 00:09:01.676 numjobs=1 00:09:01.676 00:09:01.676 verify_dump=1 00:09:01.676 verify_backlog=512 00:09:01.676 verify_state_save=0 00:09:01.676 do_verify=1 00:09:01.676 verify=crc32c-intel 00:09:01.676 [job0] 00:09:01.676 filename=/dev/nvme0n1 00:09:01.676 Could not set queue depth (nvme0n1) 00:09:01.935 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.935 fio-3.35 00:09:01.935 Starting 1 thread 00:09:02.872 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:03.131 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:03.390 
09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:03.390 09:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:03.649 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65435 00:09:08.921 00:09:08.921 job0: (groupid=0, jobs=1): err= 0: pid=65460: Tue Oct 8 09:14:59 2024 00:09:08.921 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(235MiB/6006msec) 00:09:08.921 slat (usec): min=7, max=6199, avg=51.76, stdev=216.77 00:09:08.921 clat (usec): min=467, max=22091, avg=8926.11, stdev=2394.34 00:09:08.921 lat (usec): min=490, max=22101, avg=8977.86, stdev=2398.37 00:09:08.921 clat percentiles (usec): 00:09:08.921 | 1.00th=[ 2802], 5.00th=[ 4752], 10.00th=[ 6194], 20.00th=[ 7767], 00:09:08.921 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:09:08.921 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[11863], 95.00th=[13435], 00:09:08.921 | 99.00th=[15926], 99.50th=[17433], 99.90th=[20055], 99.95th=[21365], 00:09:08.921 | 99.99th=[21627] 00:09:08.921 bw ( KiB/s): min= 3888, max=26784, per=50.52%, avg=20228.00, stdev=7532.72, samples=11 00:09:08.921 iops : min= 972, max= 6696, avg=5057.00, stdev=1883.18, samples=11 00:09:08.921 write: IOPS=5737, BW=22.4MiB/s (23.5MB/s)(118MiB/5266msec); 0 zone resets 00:09:08.921 slat (usec): min=15, max=1887, avg=58.42, stdev=145.69 00:09:08.921 clat (usec): min=1286, max=21773, avg=7470.78, stdev=2148.17 00:09:08.921 lat (usec): min=1313, max=21798, avg=7529.21, stdev=2155.06 00:09:08.921 clat percentiles (usec): 00:09:08.921 | 1.00th=[ 2409], 5.00th=[ 3523], 10.00th=[ 4228], 20.00th=[ 5866], 00:09:08.921 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 8029], 00:09:08.921 | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[ 9634], 95.00th=[10552], 00:09:08.921 | 99.00th=[13435], 99.50th=[14353], 99.90th=[16909], 99.95th=[17957], 00:09:08.921 | 99.99th=[20841] 00:09:08.921 bw ( KiB/s): min= 4104, max=26112, per=88.38%, avg=20282.64, stdev=7331.83, samples=11 00:09:08.921 iops : min= 1026, max= 6528, avg=5070.64, stdev=1832.94, samples=11 00:09:08.921 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.04% 00:09:08.921 lat (msec) : 2=0.39%, 4=4.34%, 10=79.35%, 20=15.77%, 50=0.07% 00:09:08.921 cpu : usr=5.43%, sys=21.52%, ctx=5327, majf=0, minf=90 00:09:08.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:08.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.921 issued rwts: total=60117,30213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.921 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:08.921 00:09:08.921 Run status group 0 (all jobs): 00:09:08.921 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=235MiB (246MB), run=6006-6006msec 00:09:08.921 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=118MiB (124MB), run=5266-5266msec 00:09:08.921 00:09:08.921 Disk stats (read/write): 00:09:08.921 nvme0n1: ios=59271/29692, merge=0/0, ticks=507740/208322, in_queue=716062, util=98.71% 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:08.921 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.921 rmmod nvme_tcp 00:09:08.921 rmmod nvme_fabrics 00:09:08.921 rmmod nvme_keyring 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # 
'[' -n 65237 ']' 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # killprocess 65237 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 65237 ']' 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 65237 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65237 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.921 killing process with pid 65237 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65237' 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 65237 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 65237 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:08.921 
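Note: the iptr cleanup above (iptables-save piped through grep -v SPDK_NVMF into iptables-restore) is the counterpart of the ipts helper seen later in this log: every rule the test inserts carries an SPDK_NVMF comment so teardown can drop them in one pass. Sketch of the idiom, using a rule taken from the trace:

# insert a rule tagged with a removable comment
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# teardown: rewrite the ruleset without any SPDK_NVMF-tagged rules
iptables-save | grep -v SPDK_NVMF | iptables-restore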
09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:08.921 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:09.181 ************************************ 00:09:09.181 END TEST nvmf_target_multipath 00:09:09.181 ************************************ 00:09:09.181 00:09:09.181 real 0m20.229s 00:09:09.181 user 1m15.883s 00:09:09.181 sys 0m8.673s 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.181 ************************************ 00:09:09.181 START TEST nvmf_zcopy 00:09:09.181 ************************************ 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:09.181 * Looking for test storage... 
00:09:09.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:09.181 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:09.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.443 --rc genhtml_branch_coverage=1 00:09:09.443 --rc genhtml_function_coverage=1 00:09:09.443 --rc genhtml_legend=1 00:09:09.443 --rc geninfo_all_blocks=1 00:09:09.443 --rc geninfo_unexecuted_blocks=1 00:09:09.443 00:09:09.443 ' 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:09.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.443 --rc genhtml_branch_coverage=1 00:09:09.443 --rc genhtml_function_coverage=1 00:09:09.443 --rc genhtml_legend=1 00:09:09.443 --rc geninfo_all_blocks=1 00:09:09.443 --rc geninfo_unexecuted_blocks=1 00:09:09.443 00:09:09.443 ' 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:09.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.443 --rc genhtml_branch_coverage=1 00:09:09.443 --rc genhtml_function_coverage=1 00:09:09.443 --rc genhtml_legend=1 00:09:09.443 --rc geninfo_all_blocks=1 00:09:09.443 --rc geninfo_unexecuted_blocks=1 00:09:09.443 00:09:09.443 ' 00:09:09.443 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:09.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.443 --rc genhtml_branch_coverage=1 00:09:09.444 --rc genhtml_function_coverage=1 00:09:09.444 --rc genhtml_legend=1 00:09:09.444 --rc geninfo_all_blocks=1 00:09:09.444 --rc geninfo_unexecuted_blocks=1 00:09:09.444 00:09:09.444 ' 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
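Note: the cmp_versions trace a little earlier (lt 1.15 2) walks a field-by-field numeric comparison of dot/dash/colon separated version strings. A compact sketch of that logic (not the scripts/common.sh source itself, and assuming purely numeric fields):

version_lt() {
    # return 0 if $1 < $2
    local -a a b
    local i n
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1
}
version_lt 1.15 2 && echo "1.15 is older than 2"   # matches the traced result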
00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.444 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
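Note: the "[: : integer expression expected" message from common.sh line 33 above is harmless noise: an unset variable expands to the empty string, so test's -eq has no integer to compare ('[' '' -eq 1 ']'). A defensive form avoids it; SOME_FLAG below is a placeholder, since the log does not show which variable is empty:

# default the (hypothetical) flag to 0 so the numeric test always sees an integer
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi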
00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.444 Cannot find device "nvmf_init_br" 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:09.444 09:15:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.444 Cannot find device "nvmf_init_br2" 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.444 Cannot find device "nvmf_tgt_br" 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:09.444 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.444 Cannot find device "nvmf_tgt_br2" 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.444 Cannot find device "nvmf_init_br" 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.444 Cannot find device "nvmf_init_br2" 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.444 Cannot find device "nvmf_tgt_br" 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:09.444 Cannot find device "nvmf_tgt_br2" 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:09.444 Cannot find device "nvmf_br" 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:09.444 Cannot find device "nvmf_init_if" 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:09.444 Cannot find device "nvmf_init_if2" 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.444 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:09.445 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.445 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:09.445 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.445 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.445 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:09.445 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:09.731 09:15:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:09.731 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.731 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:09.731 00:09:09.731 --- 10.0.0.3 ping statistics --- 00:09:09.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.731 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:09.731 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:09.731 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:09.731 00:09:09.731 --- 10.0.0.4 ping statistics --- 00:09:09.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.731 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:09.731 00:09:09.731 --- 10.0.0.1 ping statistics --- 00:09:09.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.731 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:09.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:09.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:09:09.731 00:09:09.731 --- 10.0.0.2 ping statistics --- 00:09:09.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.731 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # return 0 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.731 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.990 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=65764 00:09:09.990 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:09.990 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 65764 00:09:09.990 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 65764 ']' 00:09:09.990 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.990 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.990 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.990 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.990 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.990 [2024-10-08 09:15:01.477234] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
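Note: the nvmf_veth_init sequence above is the topology the just-started target runs in: two host-side initiator interfaces (10.0.0.1/.2), two target interfaces (10.0.0.3/.4) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. Condensed into a standalone sketch (names and addresses copied from the trace, error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br    # bridge the host-side peer ends together
done
ping -c 1 10.0.0.3    # quick reachability check, as done in the log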
00:09:09.990 [2024-10-08 09:15:01.477368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.990 [2024-10-08 09:15:01.617763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.249 [2024-10-08 09:15:01.731634] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.249 [2024-10-08 09:15:01.731707] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.249 [2024-10-08 09:15:01.731721] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.249 [2024-10-08 09:15:01.731745] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.249 [2024-10-08 09:15:01.731757] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.249 [2024-10-08 09:15:01.732225] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.249 [2024-10-08 09:15:01.793259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.817 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.817 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:10.817 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:10.817 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.817 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.076 [2024-10-08 09:15:02.509875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.076 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.076 [2024-10-08 09:15:02.526046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.077 malloc0 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:11.077 { 00:09:11.077 "params": { 00:09:11.077 "name": "Nvme$subsystem", 00:09:11.077 "trtype": "$TEST_TRANSPORT", 00:09:11.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.077 "adrfam": "ipv4", 00:09:11.077 "trsvcid": "$NVMF_PORT", 00:09:11.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.077 "hdgst": ${hdgst:-false}, 00:09:11.077 "ddgst": ${ddgst:-false} 00:09:11.077 }, 00:09:11.077 "method": "bdev_nvme_attach_controller" 00:09:11.077 } 00:09:11.077 EOF 00:09:11.077 )") 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:11.077 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:11.077 "params": { 00:09:11.077 "name": "Nvme1", 00:09:11.077 "trtype": "tcp", 00:09:11.077 "traddr": "10.0.0.3", 00:09:11.077 "adrfam": "ipv4", 00:09:11.077 "trsvcid": "4420", 00:09:11.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.077 "hdgst": false, 00:09:11.077 "ddgst": false 00:09:11.077 }, 00:09:11.077 "method": "bdev_nvme_attach_controller" 00:09:11.077 }' 00:09:11.077 [2024-10-08 09:15:02.632166] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:09:11.077 [2024-10-08 09:15:02.632275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65797 ] 00:09:11.336 [2024-10-08 09:15:02.774562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.336 [2024-10-08 09:15:02.901062] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.336 [2024-10-08 09:15:02.969776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.595 Running I/O for 10 seconds... 00:09:13.468 5843.00 IOPS, 45.65 MiB/s [2024-10-08T09:15:06.134Z] 5664.50 IOPS, 44.25 MiB/s [2024-10-08T09:15:07.511Z] 5817.33 IOPS, 45.45 MiB/s [2024-10-08T09:15:08.446Z] 5905.75 IOPS, 46.14 MiB/s [2024-10-08T09:15:09.381Z] 5947.40 IOPS, 46.46 MiB/s [2024-10-08T09:15:10.317Z] 6003.83 IOPS, 46.90 MiB/s [2024-10-08T09:15:11.251Z] 6051.57 IOPS, 47.28 MiB/s [2024-10-08T09:15:12.185Z] 6068.25 IOPS, 47.41 MiB/s [2024-10-08T09:15:13.119Z] 6082.44 IOPS, 47.52 MiB/s [2024-10-08T09:15:13.119Z] 6076.70 IOPS, 47.47 MiB/s 00:09:21.436 Latency(us) 00:09:21.436 [2024-10-08T09:15:13.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.436 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:21.436 Verification LBA range: start 0x0 length 0x1000 00:09:21.436 Nvme1n1 : 10.01 6080.18 47.50 0.00 0.00 20987.72 2725.70 35985.22 00:09:21.436 [2024-10-08T09:15:13.119Z] =================================================================================================================== 00:09:21.436 [2024-10-08T09:15:13.119Z] Total : 6080.18 47.50 0.00 0.00 20987.72 2725.70 35985.22 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65914 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:22.002 { 00:09:22.002 "params": { 00:09:22.002 "name": "Nvme$subsystem", 00:09:22.002 "trtype": "$TEST_TRANSPORT", 00:09:22.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.002 "adrfam": "ipv4", 00:09:22.002 "trsvcid": "$NVMF_PORT", 00:09:22.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.002 "hdgst": ${hdgst:-false}, 00:09:22.002 "ddgst": ${ddgst:-false} 00:09:22.002 }, 00:09:22.002 "method": "bdev_nvme_attach_controller" 00:09:22.002 } 00:09:22.002 EOF 00:09:22.002 )") 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:22.002 [2024-10-08 09:15:13.424194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.002 [2024-10-08 09:15:13.424253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:22.002 09:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:22.002 "params": { 00:09:22.002 "name": "Nvme1", 00:09:22.002 "trtype": "tcp", 00:09:22.002 "traddr": "10.0.0.3", 00:09:22.002 "adrfam": "ipv4", 00:09:22.002 "trsvcid": "4420", 00:09:22.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.002 "hdgst": false, 00:09:22.002 "ddgst": false 00:09:22.002 }, 00:09:22.002 "method": "bdev_nvme_attach_controller" 00:09:22.002 }' 00:09:22.002 [2024-10-08 09:15:13.436176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.002 [2024-10-08 09:15:13.436220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.448140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.448181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.460163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.460223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.472134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.472160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.481073] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
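Note: the --json /dev/fd/62 and /dev/fd/63 arguments above are bash process substitutions: gen_nvmf_target_json prints a bdev configuration containing the bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.3:4420, and bdevperf reads it from the anonymous fd. Equivalent shape of both runs, with the helper body treated as a placeholder since the full JSON wrapper is not reproduced here:

# placeholder: emit the JSON shown in the log (attach Nvme1 over tcp to 10.0.0.3:4420)
gen_nvmf_target_json() { cat nvme1_bdev_config.json; }

# 10 s verify pass, then 5 s 50/50 randrw pass, both with qd=128 and 8 KiB I/O
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_nvmf_target_json) \
    -t 10 -q 128 -w verify -o 8192
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_nvmf_target_json) \
    -t 5 -q 128 -w randrw -M 50 -o 8192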
00:09:22.003 [2024-10-08 09:15:13.481185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65914 ] 00:09:22.003 [2024-10-08 09:15:13.484135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.484160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.496182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.496224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.508220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.508245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.520160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.520232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.532195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.532238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.544169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.544209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.556205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.556245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.568191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.568230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.580218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.580244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.592257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.592298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.604216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.604267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.616201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.616239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.618123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.003 [2024-10-08 09:15:13.628264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.628304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.640237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.640275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.652223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.652262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.664230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.664268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.003 [2024-10-08 09:15:13.676253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.003 [2024-10-08 09:15:13.676294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.261 [2024-10-08 09:15:13.688237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.261 [2024-10-08 09:15:13.688277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.700254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.700292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.712242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.712282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.724253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.724292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.727599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.262 [2024-10-08 09:15:13.736260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.736302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.748260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.748299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.760260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.760299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.772266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.772304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.784289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.784329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.796282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.796323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 
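The long run of paired messages here, subsystem.c's "Requested NSID 1 already in use" immediately followed by nvmf_rpc.c's "Unable to add namespace", is the target rejecting repeated nvmf_subsystem_add_ns RPCs for an NSID that is already attached while the initiator keeps running I/O. A sketch of the kind of loop that would produce exactly this pair on every iteration, assuming the rpc.py CLI, the default RPC socket, and a hypothetical Malloc0 backing bdev (the actual zcopy test script is not shown in this log):

# Each failed attempt logs the add_ns_ext error and the nvmf_rpc_ns_paused error seen above.
for _ in $(seq 1 50); do
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
done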
[2024-10-08 09:15:13.808276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.808315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.811850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.262 [2024-10-08 09:15:13.820278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.820318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.832284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.832321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.844290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.844330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.856288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.856326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.868297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.868321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.880321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.880366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.892323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.892366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.904331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.904375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.916340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.916383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.928341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.928382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 [2024-10-08 09:15:13.940375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.262 [2024-10-08 09:15:13.940437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.262 Running I/O for 5 seconds... 
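At this point the initiator side is up: bdevperf (the process behind the spdk_pid65914 EAL file-prefix above) runs on a single core (-c 0x1, matching the "Reactor started on core 0" notice), the default socket implementation has been overridden to uring, and a 5-second I/O run begins. A sketch of a bdevperf invocation consistent with what the log shows; the queue depth and workload type are assumptions, the JSON path is the hypothetical file from the earlier sketch, and the 8 KiB I/O size is inferred from the throughput figures reported below:

# -t 5 matches "Running I/O for 5 seconds"; -q 64 and -w verify are assumptions.
build/examples/bdevperf --json /tmp/bdevperf_nvme.json -o 8192 -q 64 -w verify -t 5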
00:09:22.520 [2024-10-08 09:15:13.958077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.520 [2024-10-08 09:15:13.958111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.520 [2024-10-08 09:15:13.973507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.520 [2024-10-08 09:15:13.973555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.520 [2024-10-08 09:15:13.984589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.520 [2024-10-08 09:15:13.984636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.520 [2024-10-08 09:15:14.000107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.520 [2024-10-08 09:15:14.000141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.520 [2024-10-08 09:15:14.016974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.520 [2024-10-08 09:15:14.017005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.032571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.032617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.041700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.041783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.058234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.058268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.077171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.077219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.091794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.091839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.101539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.101585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.115971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.116033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.132538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.132585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.147668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.147715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.164134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 
[2024-10-08 09:15:14.164165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.181926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.181972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.521 [2024-10-08 09:15:14.196318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.521 [2024-10-08 09:15:14.196365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.213497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.213544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.227606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.227653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.242706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.242770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.252035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.252064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.267646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.267692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.283544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.283590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.301169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.301216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.318309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.318374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.335438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.335484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.351046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.351076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.366463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.366510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.384724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.384780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.399414] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.399463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.414964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.414993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.433091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.433151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.779 [2024-10-08 09:15:14.447478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.779 [2024-10-08 09:15:14.447523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.464979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.465025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.478959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.478989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.494261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.494325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.511815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.511869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.527408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.527454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.537407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.537453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.553135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.553181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.571464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.571509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.586232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.586264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.604611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.604658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.618576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.618622] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.633911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.633957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.652894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.652927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.667900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.667945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.685474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.685523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.702579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.702625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.039 [2024-10-08 09:15:14.719017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.039 [2024-10-08 09:15:14.719052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.735138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.735200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.753166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.753211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.767253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.767299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.782577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.782622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.791832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.791862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.808230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.808277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.823349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.823396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.840841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.840872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.855123] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.855169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.870546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.870592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.880151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.880197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.895606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.895652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.910364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.910410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.927128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.927174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.943033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.943078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 11878.00 IOPS, 92.80 MiB/s [2024-10-08T09:15:14.980Z] [2024-10-08 09:15:14.961090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.961152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.297 [2024-10-08 09:15:14.977077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.297 [2024-10-08 09:15:14.977108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:14.995404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:14.995450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.009652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.009703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.025639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.025687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.042124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.042156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.058984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.059015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.074297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
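The interim statistics line above (11878.00 IOPS, 92.80 MiB/s) ties its two figures together through the I/O size: at 8 KiB per I/O the reported IOPS reproduce the reported bandwidth, which is what the 8 KiB assumption in the bdevperf sketch earlier is based on:

awk 'BEGIN { printf "%.2f MiB/s\n", 11878 * 8192 / (1024 * 1024) }'   # prints 92.80 MiB/s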
00:09:23.556 [2024-10-08 09:15:15.074330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.090048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.090087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.106027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.106062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.124092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.124154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.139273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.139322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.148363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.148392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.164471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.164517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.174563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.174610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.190500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.190550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.204433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.204479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.556 [2024-10-08 09:15:15.220305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.556 [2024-10-08 09:15:15.220353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.239102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.239152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.253511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.253556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.268175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.268221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.283704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.283760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.301801] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.301846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.317619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.317664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.335931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.335966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.350663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.350710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.367331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.367366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.383788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.383839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.402555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.402602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.416211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.416258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.433099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.433146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.448703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.448760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.465073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.465103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.480603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.480649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.814 [2024-10-08 09:15:15.490220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.814 [2024-10-08 09:15:15.490252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.506025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.506072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.524253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.524300] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.539307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.539352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.555948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.555981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.572996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.573027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.590989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.591020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.605073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.605118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.620925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.620955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.637112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.637158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.655184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.655231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.669978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.670050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.684927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.684956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.693580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.693624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.710132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.710164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.727563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.727610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.073 [2024-10-08 09:15:15.742254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.073 [2024-10-08 09:15:15.742287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.757571] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.757617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.767447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.767482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.783467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.783531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.801175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.801223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.817166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.817212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.835110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.835159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.850069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.850111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.861220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.861267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.877935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.877980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.893634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.893680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.912974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.913034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.926699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.926744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.943053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.943111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 11847.00 IOPS, 92.55 MiB/s [2024-10-08T09:15:16.015Z] [2024-10-08 09:15:15.959185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.959231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.974910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
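The second interim report (11847.00 IOPS, 92.55 MiB/s) arrives roughly one second of wall-clock time after the first, per the bracketed [2024-10-08T09:15:...Z] stamps, and again matches the 8 KiB-per-I/O arithmetic; throughput holds steady to within about 1% while the namespace-add failures continue in the background.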
00:09:24.332 [2024-10-08 09:15:15.974940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:15.984296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:15.984342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.332 [2024-10-08 09:15:16.009071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.332 [2024-10-08 09:15:16.009119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.590 [2024-10-08 09:15:16.026577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.590 [2024-10-08 09:15:16.026623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.590 [2024-10-08 09:15:16.042550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.590 [2024-10-08 09:15:16.042595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.590 [2024-10-08 09:15:16.061254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.590 [2024-10-08 09:15:16.061302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.590 [2024-10-08 09:15:16.075834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.590 [2024-10-08 09:15:16.075869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.590 [2024-10-08 09:15:16.092103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.590 [2024-10-08 09:15:16.092152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.590 [2024-10-08 09:15:16.108632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.590 [2024-10-08 09:15:16.108666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.590 [2024-10-08 09:15:16.126865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.590 [2024-10-08 09:15:16.126900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.141357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.141405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.151317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.151350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.166590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.166811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.176433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.176469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.188011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.188049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.203452] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.203489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.212802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.212860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.229160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.229321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.244935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.244969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.591 [2024-10-08 09:15:16.261506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.591 [2024-10-08 09:15:16.261540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.276515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.276687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.293084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.293118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.309616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.309651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.326744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.326806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.340732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.340808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.356645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.356678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.372790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.372820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.388551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.388585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.405798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.405832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.421232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.421268] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.437006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.437053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.454049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.454086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.471352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.471567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.487489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.487652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.497692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.497891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.511945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.512122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.849 [2024-10-08 09:15:16.521365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.849 [2024-10-08 09:15:16.521517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.536947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.537138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.554713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.554942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.570097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.570249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.579982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.580202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.596240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.596394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.613241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.613397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.630344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.630526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.645692] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.645887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.661185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.661337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.672627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.672806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.688735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.688920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.704560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.704713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.722241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.722443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.737877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.738061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.749132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.749284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.763969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.764021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.108 [2024-10-08 09:15:16.778976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.108 [2024-10-08 09:15:16.779010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.797913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.797947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.812720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.812783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.822593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.822627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.837433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.837593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.855257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.855293] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.871917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.871950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.888866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.888900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.905610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.905644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.922146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.922184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.940075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.940110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 11933.33 IOPS, 93.23 MiB/s [2024-10-08T09:15:17.050Z] [2024-10-08 09:15:16.956186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.956219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.972845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.972883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:16.988995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:16.989030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:17.007374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:17.007553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:17.021522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:17.021556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.367 [2024-10-08 09:15:17.037676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.367 [2024-10-08 09:15:17.037712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.056937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.056982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.072060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.072095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.090456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.090619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 
09:15:17.104323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.104358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.119379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.119414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.130715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.130780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.146846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.146880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.163605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.163639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.179209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.179242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.191124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.191164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.208081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.208132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.223712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.223783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.241107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.241271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.256627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.256842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.266235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.266273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.281392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.281428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.291554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.291603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.643 [2024-10-08 09:15:17.307182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.643 [2024-10-08 09:15:17.307220] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.323280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.323318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.340057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.340090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.357208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.357262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.373300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.373337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.389819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.389878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.406681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.406715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.423551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.423724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.439765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.439827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.457968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.458010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.472681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.472714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.489599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.489804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.503575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.503608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.519080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.519145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.528865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.528902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.544630] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.544819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.560023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.560277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.576943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.576984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.912 [2024-10-08 09:15:17.593565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.912 [2024-10-08 09:15:17.593619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.609334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.609369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.626820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.626868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.641443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.641478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.657001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.657037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.675096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.675285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.689478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.689512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.706159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.706338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.720567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.720600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.736882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.736916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.752609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.752800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.762296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.762362] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.777159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.777192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.793339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.793374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.809726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.809800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.826705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.826784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.842123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.842160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.171 [2024-10-08 09:15:17.851505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.171 [2024-10-08 09:15:17.851538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:17.867284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:17.867319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:17.884053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:17.884252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:17.899916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:17.899951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:17.917430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:17.917464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:17.934223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:17.934260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 11904.50 IOPS, 93.00 MiB/s [2024-10-08T09:15:18.113Z] [2024-10-08 09:15:17.952285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:17.952323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:17.967090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:17.967124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:17.983979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:17.984013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 
09:15:17.999555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:17.999588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:18.010976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:18.011011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:18.027762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:18.027819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:18.043209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:18.043242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:18.052547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:18.052593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:18.068510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:18.068546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:18.084460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:18.084495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.430 [2024-10-08 09:15:18.101704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.430 [2024-10-08 09:15:18.101789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.117829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.117866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.134663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.134698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.150062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.150100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.168968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.169005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.183120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.183161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.199062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.199097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.216996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.217032] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.231936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.232120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.248168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.248208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.265974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.266048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.280869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.280905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.296770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.296824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.314157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.314195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.329997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.330076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.347158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.347312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.689 [2024-10-08 09:15:18.363552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.689 [2024-10-08 09:15:18.363591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.947 [2024-10-08 09:15:18.373221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.947 [2024-10-08 09:15:18.373259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.947 [2024-10-08 09:15:18.389573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.947 [2024-10-08 09:15:18.389609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.947 [2024-10-08 09:15:18.406126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.947 [2024-10-08 09:15:18.406281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.947 [2024-10-08 09:15:18.424385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.947 [2024-10-08 09:15:18.424422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.947 [2024-10-08 09:15:18.439724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.947 [2024-10-08 09:15:18.439789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.947 [2024-10-08 09:15:18.457151] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.947 [2024-10-08 09:15:18.457331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.947 [2024-10-08 09:15:18.473686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.947 [2024-10-08 09:15:18.473723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.947 [2024-10-08 09:15:18.489878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.947 [2024-10-08 09:15:18.489916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.947 [2024-10-08 09:15:18.507353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.948 [2024-10-08 09:15:18.507391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.948 [2024-10-08 09:15:18.523009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.948 [2024-10-08 09:15:18.523046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.948 [2024-10-08 09:15:18.533062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.948 [2024-10-08 09:15:18.533099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.948 [2024-10-08 09:15:18.548168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.948 [2024-10-08 09:15:18.548317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.948 [2024-10-08 09:15:18.563856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.948 [2024-10-08 09:15:18.563894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.948 [2024-10-08 09:15:18.580570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.948 [2024-10-08 09:15:18.580608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.948 [2024-10-08 09:15:18.598104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.948 [2024-10-08 09:15:18.598142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.948 [2024-10-08 09:15:18.613371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.948 [2024-10-08 09:15:18.613410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.948 [2024-10-08 09:15:18.623757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.948 [2024-10-08 09:15:18.623808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.636594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.636631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.651666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.651868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.661232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.661267] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.676957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.676999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.688700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.688765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.705439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.705483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.721647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.721693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.738684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.738718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.754978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.755012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.770893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.770927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.781655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.781689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.798662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.798848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.812436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.812475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.829256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.829301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.843922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.843956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.861546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.861580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.206 [2024-10-08 09:15:18.876295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.206 [2024-10-08 09:15:18.876328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.464 [2024-10-08 09:15:18.895435] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.464 [2024-10-08 09:15:18.895469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.464 [2024-10-08 09:15:18.910688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.464 [2024-10-08 09:15:18.910878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.464 [2024-10-08 09:15:18.920447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.464 [2024-10-08 09:15:18.920511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.464 [2024-10-08 09:15:18.937515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.464 [2024-10-08 09:15:18.937551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.464 11872.60 IOPS, 92.75 MiB/s [2024-10-08T09:15:19.147Z] [2024-10-08 09:15:18.951360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.464 [2024-10-08 09:15:18.951399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.464 00:09:27.464 Latency(us) 00:09:27.464 [2024-10-08T09:15:19.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.464 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:27.464 Nvme1n1 : 5.01 11873.35 92.76 0.00 0.00 10766.23 3813.00 20256.58 00:09:27.464 [2024-10-08T09:15:19.147Z] =================================================================================================================== 00:09:27.464 [2024-10-08T09:15:19.147Z] Total : 11873.35 92.76 0.00 0.00 10766.23 3813.00 20256.58 00:09:27.464 [2024-10-08 09:15:18.961285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:18.961440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:18.973298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:18.973488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:18.985289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:18.985451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:18.997295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:18.997462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.009278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.009306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.021285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.021448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.033288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.033318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 
09:15:19.045305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.045509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.057292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.057319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.069290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.069317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.081290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.081316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.093293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.093319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.105296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.105323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.117298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.117324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.129317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.129343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.465 [2024-10-08 09:15:19.141304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.465 [2024-10-08 09:15:19.141330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.153309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.153336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.165310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.165336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.177312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.177339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.189319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.189348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.201316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.201343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.213319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.213346] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.225322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.225360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.237326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.237351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.249329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.249354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.261353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.261380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 [2024-10-08 09:15:19.273630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.723 [2024-10-08 09:15:19.273664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.723 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65914) - No such process 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65914 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.723 delay0 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.723 09:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:27.981 [2024-10-08 09:15:19.472501] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:34.567 Initializing NVMe Controllers 00:09:34.567 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:34.567 Associating 
TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:34.567 Initialization complete. Launching workers. 00:09:34.567 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 828 00:09:34.567 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1115, failed to submit 33 00:09:34.567 success 999, unsuccessful 116, failed 0 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.567 rmmod nvme_tcp 00:09:34.567 rmmod nvme_fabrics 00:09:34.567 rmmod nvme_keyring 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 65764 ']' 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 65764 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 65764 ']' 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 65764 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65764 00:09:34.567 killing process with pid 65764 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65764' 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 65764 00:09:34.567 09:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 65764 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:34.567 
09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.567 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:34.825 00:09:34.825 real 0m25.576s 00:09:34.825 user 0m41.104s 00:09:34.825 sys 0m7.320s 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.825 ************************************ 00:09:34.825 END TEST nvmf_zcopy 00:09:34.825 ************************************ 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.825 ************************************ 
00:09:34.825 START TEST nvmf_nmic 00:09:34.825 ************************************ 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:34.825 * Looking for test storage... 00:09:34.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.825 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:34.826 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:34.826 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.084 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.085 --rc genhtml_branch_coverage=1 00:09:35.085 --rc genhtml_function_coverage=1 00:09:35.085 --rc genhtml_legend=1 00:09:35.085 --rc geninfo_all_blocks=1 00:09:35.085 --rc geninfo_unexecuted_blocks=1 00:09:35.085 00:09:35.085 ' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.085 --rc genhtml_branch_coverage=1 00:09:35.085 --rc genhtml_function_coverage=1 00:09:35.085 --rc genhtml_legend=1 00:09:35.085 --rc geninfo_all_blocks=1 00:09:35.085 --rc geninfo_unexecuted_blocks=1 00:09:35.085 00:09:35.085 ' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.085 --rc genhtml_branch_coverage=1 00:09:35.085 --rc genhtml_function_coverage=1 00:09:35.085 --rc genhtml_legend=1 00:09:35.085 --rc geninfo_all_blocks=1 00:09:35.085 --rc geninfo_unexecuted_blocks=1 00:09:35.085 00:09:35.085 ' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.085 --rc genhtml_branch_coverage=1 00:09:35.085 --rc genhtml_function_coverage=1 00:09:35.085 --rc genhtml_legend=1 00:09:35.085 --rc geninfo_all_blocks=1 00:09:35.085 --rc geninfo_unexecuted_blocks=1 00:09:35.085 00:09:35.085 ' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.085 09:15:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.085 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:35.085 09:15:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:35.085 Cannot 
find device "nvmf_init_br" 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:35.085 Cannot find device "nvmf_init_br2" 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:35.085 Cannot find device "nvmf_tgt_br" 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.085 Cannot find device "nvmf_tgt_br2" 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:35.085 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:35.085 Cannot find device "nvmf_init_br" 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:35.086 Cannot find device "nvmf_init_br2" 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:35.086 Cannot find device "nvmf_tgt_br" 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:35.086 Cannot find device "nvmf_tgt_br2" 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:35.086 Cannot find device "nvmf_br" 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:35.086 Cannot find device "nvmf_init_if" 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:35.086 Cannot find device "nvmf_init_if2" 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:35.086 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:35.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:09:35.344 00:09:35.344 --- 10.0.0.3 ping statistics --- 00:09:35.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.344 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:35.344 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:35.344 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:09:35.344 00:09:35.344 --- 10.0.0.4 ping statistics --- 00:09:35.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.344 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:09:35.344 00:09:35.344 --- 10.0.0.1 ping statistics --- 00:09:35.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.344 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:35.344 09:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:35.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:35.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:09:35.344 00:09:35.344 --- 10.0.0.2 ping statistics --- 00:09:35.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.344 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:35.344 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.344 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # return 0 00:09:35.344 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:35.344 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.344 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:35.344 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:35.344 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.344 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:35.344 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=66303 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 66303 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 66303 ']' 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.602 09:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.602 [2024-10-08 09:15:27.133012] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:09:35.602 [2024-10-08 09:15:27.133148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.861 [2024-10-08 09:15:27.287800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.861 [2024-10-08 09:15:27.405652] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.861 [2024-10-08 09:15:27.405998] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.861 [2024-10-08 09:15:27.406167] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.861 [2024-10-08 09:15:27.406234] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.861 [2024-10-08 09:15:27.406345] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.861 [2024-10-08 09:15:27.407657] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.861 [2024-10-08 09:15:27.407800] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.861 [2024-10-08 09:15:27.407920] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.861 [2024-10-08 09:15:27.407971] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.861 [2024-10-08 09:15:27.465893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 [2024-10-08 09:15:28.227650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 Malloc0 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.796 09:15:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 [2024-10-08 09:15:28.286889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:36.796 test case1: single bdev can't be used in multiple subsystems 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 [2024-10-08 09:15:28.310745] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:36.796 [2024-10-08 09:15:28.310792] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:36.796 [2024-10-08 09:15:28.310805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.796 request: 00:09:36.796 { 00:09:36.796 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:36.796 "namespace": { 00:09:36.796 "bdev_name": "Malloc0", 00:09:36.796 "no_auto_visible": false 00:09:36.796 }, 00:09:36.796 "method": "nvmf_subsystem_add_ns", 00:09:36.796 "req_id": 1 00:09:36.796 } 00:09:36.796 Got JSON-RPC error response 00:09:36.796 response: 00:09:36.796 { 00:09:36.796 "code": -32602, 00:09:36.796 "message": "Invalid parameters" 00:09:36.796 } 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:36.796 Adding namespace failed - expected result. 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:36.796 test case2: host connect to nvmf target in multiple paths 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 [2024-10-08 09:15:28.326895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid=a5ef64a0-86d4-4d8b-af10-05a9f556092c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:36.796 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid=a5ef64a0-86d4-4d8b-af10-05a9f556092c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:37.055 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:37.055 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:37.055 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.055 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:37.055 09:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:38.984 09:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:38.984 09:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:38.984 09:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:38.984 09:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:38.984 09:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:38.984 09:15:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:38.984 09:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:38.984 [global] 00:09:38.984 thread=1 00:09:38.984 invalidate=1 00:09:38.984 rw=write 00:09:38.984 time_based=1 00:09:38.984 runtime=1 00:09:38.984 ioengine=libaio 00:09:38.984 direct=1 00:09:38.984 bs=4096 00:09:38.984 iodepth=1 00:09:38.984 norandommap=0 00:09:38.984 numjobs=1 00:09:38.984 00:09:38.984 verify_dump=1 00:09:38.984 verify_backlog=512 00:09:38.984 verify_state_save=0 00:09:38.984 do_verify=1 00:09:38.984 verify=crc32c-intel 00:09:38.984 [job0] 00:09:38.984 filename=/dev/nvme0n1 00:09:38.984 Could not set queue depth (nvme0n1) 00:09:39.243 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.243 fio-3.35 00:09:39.243 Starting 1 thread 00:09:40.617 00:09:40.617 job0: (groupid=0, jobs=1): err= 0: pid=66390: Tue Oct 8 09:15:31 2024 00:09:40.617 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:40.617 slat (nsec): min=11860, max=44132, avg=14259.24, stdev=3253.53 00:09:40.617 clat (usec): min=132, max=255, avg=171.20, stdev=16.62 00:09:40.617 lat (usec): min=145, max=299, avg=185.46, stdev=16.91 00:09:40.617 clat percentiles (usec): 00:09:40.617 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:09:40.617 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:09:40.617 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:09:40.617 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 235], 99.95th=[ 243], 00:09:40.617 | 99.99th=[ 255] 00:09:40.617 write: IOPS=3243, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:09:40.617 slat (nsec): min=14808, max=99815, avg=21922.04, stdev=5378.83 00:09:40.617 clat (usec): min=82, max=195, avg=106.92, stdev=13.38 00:09:40.617 lat (usec): min=101, max=295, avg=128.85, stdev=15.18 00:09:40.617 clat percentiles (usec): 00:09:40.617 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 96], 00:09:40.617 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 104], 60.00th=[ 108], 00:09:40.617 | 70.00th=[ 112], 80.00th=[ 118], 90.00th=[ 126], 95.00th=[ 133], 00:09:40.617 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 165], 00:09:40.617 | 99.99th=[ 196] 00:09:40.617 bw ( KiB/s): min=12656, max=12656, per=97.54%, avg=12656.00, stdev= 0.00, samples=1 00:09:40.617 iops : min= 3164, max= 3164, avg=3164.00, stdev= 0.00, samples=1 00:09:40.617 lat (usec) : 100=17.69%, 250=82.29%, 500=0.02% 00:09:40.617 cpu : usr=3.30%, sys=8.30%, ctx=6319, majf=0, minf=5 00:09:40.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.617 issued rwts: total=3072,3247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.617 00:09:40.617 Run status group 0 (all jobs): 00:09:40.617 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:40.617 WRITE: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1001-1001msec 00:09:40.617 00:09:40.617 Disk stats (read/write): 00:09:40.617 nvme0n1: ios=2694/3072, merge=0/0, ticks=502/353, in_queue=855, 
util=91.07% 00:09:40.617 09:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:40.617 09:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.617 09:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:40.617 09:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:40.617 09:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.617 09:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.617 09:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.617 rmmod nvme_tcp 00:09:40.617 rmmod nvme_fabrics 00:09:40.617 rmmod nvme_keyring 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:40.617 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 66303 ']' 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 66303 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 66303 ']' 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 66303 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66303 00:09:40.618 killing process with pid 66303 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66303' 00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 66303 
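A quick cross-check of the fio figures above: with bs=4096 and iodepth=1, bandwidth is simply IOPS times the 4 KiB block size, so the reported rates follow directly from the issued I/O counts over the ~1001 msec run (this arithmetic is an annotation, not part of the captured output):

    # read : 3072 ios / 1.001 s ≈ 3068 IOPS; 3068 * 4096 B ≈ 12.0 MiB/s (12.6 MB/s)
    # write: 3247 ios / 1.001 s ≈ 3243 IOPS; 3243 * 4096 B ≈ 12.7 MiB/s (13.3 MB/s)
    awk 'BEGIN { printf "read %.1f MiB/s, write %.1f MiB/s\n", 3068*4096/1048576, 3243*4096/1048576 }'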
00:09:40.618 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 66303 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:40.876 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:41.134 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:41.134 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.134 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.134 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:41.134 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.134 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.135 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.135 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:41.135 00:09:41.135 real 0m6.325s 00:09:41.135 user 0m19.000s 00:09:41.135 sys 0m2.473s 00:09:41.135 ************************************ 00:09:41.135 END TEST nvmf_nmic 00:09:41.135 ************************************ 00:09:41.135 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.135 09:15:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.135 09:15:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:41.135 09:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.135 09:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.135 09:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.135 ************************************ 00:09:41.135 START TEST nvmf_fio_target 00:09:41.135 ************************************ 00:09:41.135 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:41.135 * Looking for test storage... 00:09:41.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:41.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.394 --rc genhtml_branch_coverage=1 00:09:41.394 --rc genhtml_function_coverage=1 00:09:41.394 --rc genhtml_legend=1 00:09:41.394 --rc geninfo_all_blocks=1 00:09:41.394 --rc geninfo_unexecuted_blocks=1 00:09:41.394 00:09:41.394 ' 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:41.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.394 --rc genhtml_branch_coverage=1 00:09:41.394 --rc genhtml_function_coverage=1 00:09:41.394 --rc genhtml_legend=1 00:09:41.394 --rc geninfo_all_blocks=1 00:09:41.394 --rc geninfo_unexecuted_blocks=1 00:09:41.394 00:09:41.394 ' 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:41.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.394 --rc genhtml_branch_coverage=1 00:09:41.394 --rc genhtml_function_coverage=1 00:09:41.394 --rc genhtml_legend=1 00:09:41.394 --rc geninfo_all_blocks=1 00:09:41.394 --rc geninfo_unexecuted_blocks=1 00:09:41.394 00:09:41.394 ' 00:09:41.394 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:41.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.394 --rc genhtml_branch_coverage=1 00:09:41.394 --rc genhtml_function_coverage=1 00:09:41.395 --rc genhtml_legend=1 00:09:41.395 --rc geninfo_all_blocks=1 00:09:41.395 --rc geninfo_unexecuted_blocks=1 00:09:41.395 00:09:41.395 ' 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:41.395 
09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.395 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.395 09:15:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.395 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:41.396 Cannot find device "nvmf_init_br" 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:41.396 Cannot find device "nvmf_init_br2" 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:41.396 Cannot find device "nvmf_tgt_br" 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:41.396 09:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.396 Cannot find device "nvmf_tgt_br2" 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:41.396 Cannot find device "nvmf_init_br" 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:41.396 Cannot find device "nvmf_init_br2" 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:41.396 Cannot find device "nvmf_tgt_br" 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:41.396 Cannot find device "nvmf_tgt_br2" 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:41.396 Cannot find device "nvmf_br" 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:41.396 Cannot find device "nvmf_init_if" 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:41.396 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:41.654 Cannot find device "nvmf_init_if2" 00:09:41.654 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:41.654 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.654 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:41.655 
09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:41.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:41.655 00:09:41.655 --- 10.0.0.3 ping statistics --- 00:09:41.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.655 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:41.655 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:41.655 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:41.655 00:09:41.655 --- 10.0.0.4 ping statistics --- 00:09:41.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.655 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:41.655 00:09:41.655 --- 10.0.0.1 ping statistics --- 00:09:41.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.655 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:41.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:09:41.655 00:09:41.655 --- 10.0.0.2 ping statistics --- 00:09:41.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.655 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # return 0 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:41.655 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=66624 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 66624 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 66624 ']' 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.914 09:15:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.914 [2024-10-08 09:15:33.417613] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:09:41.914 [2024-10-08 09:15:33.417936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.914 [2024-10-08 09:15:33.561934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.173 [2024-10-08 09:15:33.683642] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.173 [2024-10-08 09:15:33.683926] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.173 [2024-10-08 09:15:33.684087] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.173 [2024-10-08 09:15:33.684241] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.173 [2024-10-08 09:15:33.684285] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.173 [2024-10-08 09:15:33.685714] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.173 [2024-10-08 09:15:33.685933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.173 [2024-10-08 09:15:33.685857] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.173 [2024-10-08 09:15:33.685929] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.173 [2024-10-08 09:15:33.743009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.107 09:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.107 09:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:43.107 09:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:43.107 09:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.107 09:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.107 09:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.107 09:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:43.365 [2024-10-08 09:15:34.806620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.365 09:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.636 09:15:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:43.636 09:15:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.907 09:15:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:43.907 09:15:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.166 09:15:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:44.166 09:15:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.424 09:15:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:44.424 09:15:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:44.682 09:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.941 09:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:44.941 09:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.200 09:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:45.200 09:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.458 09:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:45.458 09:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:45.717 09:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:45.974 09:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:45.974 09:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.232 09:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:46.232 09:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:46.490 09:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:46.748 [2024-10-08 09:15:38.353728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:46.748 09:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:47.005 09:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:47.262 09:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid=a5ef64a0-86d4-4d8b-af10-05a9f556092c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:47.520 09:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:47.520 09:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:47.520 09:15:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.520 09:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:47.520 09:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:47.520 09:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:49.419 09:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:49.419 09:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:49.419 09:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.419 09:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:49.419 09:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.419 09:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:49.419 09:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:49.419 [global] 00:09:49.419 thread=1 00:09:49.419 invalidate=1 00:09:49.419 rw=write 00:09:49.419 time_based=1 00:09:49.419 runtime=1 00:09:49.419 ioengine=libaio 00:09:49.419 direct=1 00:09:49.419 bs=4096 00:09:49.419 iodepth=1 00:09:49.419 norandommap=0 00:09:49.419 numjobs=1 00:09:49.419 00:09:49.419 verify_dump=1 00:09:49.419 verify_backlog=512 00:09:49.419 verify_state_save=0 00:09:49.419 do_verify=1 00:09:49.419 verify=crc32c-intel 00:09:49.419 [job0] 00:09:49.419 filename=/dev/nvme0n1 00:09:49.419 [job1] 00:09:49.419 filename=/dev/nvme0n2 00:09:49.419 [job2] 00:09:49.419 filename=/dev/nvme0n3 00:09:49.419 [job3] 00:09:49.419 filename=/dev/nvme0n4 00:09:49.677 Could not set queue depth (nvme0n1) 00:09:49.677 Could not set queue depth (nvme0n2) 00:09:49.677 Could not set queue depth (nvme0n3) 00:09:49.677 Could not set queue depth (nvme0n4) 00:09:49.677 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.677 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.677 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.677 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.677 fio-3.35 00:09:49.677 Starting 4 threads 00:09:51.049 00:09:51.049 job0: (groupid=0, jobs=1): err= 0: pid=66819: Tue Oct 8 09:15:42 2024 00:09:51.049 read: IOPS=2006, BW=8028KiB/s (8221kB/s)(8036KiB/1001msec) 00:09:51.049 slat (nsec): min=12345, max=42371, avg=14476.86, stdev=2801.46 00:09:51.049 clat (usec): min=159, max=548, avg=265.35, stdev=25.14 00:09:51.049 lat (usec): min=172, max=561, avg=279.83, stdev=25.73 00:09:51.049 clat percentiles (usec): 00:09:51.049 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 251], 00:09:51.049 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:09:51.049 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 297], 00:09:51.049 | 99.00th=[ 379], 99.50th=[ 404], 99.90th=[ 445], 99.95th=[ 478], 00:09:51.049 | 99.99th=[ 553] 
00:09:51.049 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:51.049 slat (nsec): min=15674, max=77882, avg=21654.98, stdev=4241.81 00:09:51.049 clat (usec): min=100, max=772, avg=188.53, stdev=26.26 00:09:51.049 lat (usec): min=119, max=791, avg=210.18, stdev=27.57 00:09:51.049 clat percentiles (usec): 00:09:51.049 | 1.00th=[ 120], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 176], 00:09:51.049 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:09:51.049 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 215], 00:09:51.049 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 355], 99.95th=[ 359], 00:09:51.049 | 99.99th=[ 775] 00:09:51.049 bw ( KiB/s): min= 8192, max= 8192, per=20.16%, avg=8192.00, stdev= 0.00, samples=1 00:09:51.049 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:51.049 lat (usec) : 250=59.48%, 500=40.47%, 750=0.02%, 1000=0.02% 00:09:51.049 cpu : usr=1.40%, sys=5.90%, ctx=4057, majf=0, minf=7 00:09:51.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.049 issued rwts: total=2009,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.049 job1: (groupid=0, jobs=1): err= 0: pid=66820: Tue Oct 8 09:15:42 2024 00:09:51.049 read: IOPS=2001, BW=8008KiB/s (8200kB/s)(8016KiB/1001msec) 00:09:51.049 slat (nsec): min=11906, max=47943, avg=15213.38, stdev=2811.33 00:09:51.049 clat (usec): min=164, max=1471, avg=264.58, stdev=37.27 00:09:51.049 lat (usec): min=177, max=1484, avg=279.79, stdev=37.39 00:09:51.049 clat percentiles (usec): 00:09:51.049 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 249], 00:09:51.049 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:09:51.049 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:09:51.049 | 99.00th=[ 355], 99.50th=[ 449], 99.90th=[ 498], 99.95th=[ 627], 00:09:51.049 | 99.99th=[ 1467] 00:09:51.049 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:51.049 slat (usec): min=17, max=108, avg=23.48, stdev= 6.15 00:09:51.049 clat (usec): min=98, max=2114, avg=187.45, stdev=48.79 00:09:51.049 lat (usec): min=117, max=2146, avg=210.93, stdev=49.51 00:09:51.049 clat percentiles (usec): 00:09:51.049 | 1.00th=[ 117], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 174], 00:09:51.049 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:09:51.049 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 215], 00:09:51.049 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 486], 99.95th=[ 668], 00:09:51.049 | 99.99th=[ 2114] 00:09:51.049 bw ( KiB/s): min= 8192, max= 8192, per=20.16%, avg=8192.00, stdev= 0.00, samples=1 00:09:51.049 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:51.049 lat (usec) : 100=0.02%, 250=61.25%, 500=38.62%, 750=0.05% 00:09:51.049 lat (msec) : 2=0.02%, 4=0.02% 00:09:51.049 cpu : usr=2.10%, sys=5.80%, ctx=4053, majf=0, minf=7 00:09:51.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.050 issued rwts: total=2004,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.050 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:51.050 job2: (groupid=0, jobs=1): err= 0: pid=66822: Tue Oct 8 09:15:42 2024 00:09:51.050 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:51.050 slat (nsec): min=12453, max=57661, avg=16044.80, stdev=4136.35 00:09:51.050 clat (usec): min=154, max=1926, avg=182.25, stdev=38.85 00:09:51.050 lat (usec): min=167, max=1940, avg=198.29, stdev=39.47 00:09:51.050 clat percentiles (usec): 00:09:51.050 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:09:51.050 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:09:51.050 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 206], 00:09:51.050 | 99.00th=[ 221], 99.50th=[ 269], 99.90th=[ 379], 99.95th=[ 644], 00:09:51.050 | 99.99th=[ 1926] 00:09:51.050 write: IOPS=2997, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:09:51.050 slat (nsec): min=14793, max=89416, avg=23908.94, stdev=6264.12 00:09:51.050 clat (usec): min=104, max=513, avg=136.35, stdev=18.77 00:09:51.050 lat (usec): min=122, max=539, avg=160.26, stdev=20.24 00:09:51.050 clat percentiles (usec): 00:09:51.050 | 1.00th=[ 110], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 126], 00:09:51.050 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:51.050 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 159], 00:09:51.050 | 99.00th=[ 186], 99.50th=[ 219], 99.90th=[ 424], 99.95th=[ 498], 00:09:51.050 | 99.99th=[ 515] 00:09:51.050 bw ( KiB/s): min=12288, max=12288, per=30.24%, avg=12288.00, stdev= 0.00, samples=1 00:09:51.050 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:51.050 lat (usec) : 250=99.59%, 500=0.36%, 750=0.04% 00:09:51.050 lat (msec) : 2=0.02% 00:09:51.050 cpu : usr=2.60%, sys=8.80%, ctx=5568, majf=0, minf=7 00:09:51.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.050 issued rwts: total=2560,3000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.050 job3: (groupid=0, jobs=1): err= 0: pid=66823: Tue Oct 8 09:15:42 2024 00:09:51.050 read: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:09:51.050 slat (nsec): min=11923, max=34282, avg=13893.70, stdev=1983.89 00:09:51.050 clat (usec): min=147, max=411, avg=178.37, stdev=12.79 00:09:51.050 lat (usec): min=160, max=424, avg=192.27, stdev=13.05 00:09:51.050 clat percentiles (usec): 00:09:51.050 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:09:51.050 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:09:51.050 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:09:51.050 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 221], 99.95th=[ 221], 00:09:51.050 | 99.99th=[ 412] 00:09:51.050 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:51.050 slat (nsec): min=17970, max=70632, avg=21447.60, stdev=4436.66 00:09:51.050 clat (usec): min=105, max=1529, avg=134.30, stdev=31.66 00:09:51.050 lat (usec): min=124, max=1548, avg=155.74, stdev=32.07 00:09:51.050 clat percentiles (usec): 00:09:51.050 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 00:09:51.050 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 135], 00:09:51.050 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 153], 00:09:51.050 | 99.00th=[ 163], 
99.50th=[ 169], 99.90th=[ 445], 99.95th=[ 758], 00:09:51.050 | 99.99th=[ 1532] 00:09:51.050 bw ( KiB/s): min=12288, max=12288, per=30.24%, avg=12288.00, stdev= 0.00, samples=1 00:09:51.050 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:51.050 lat (usec) : 250=99.90%, 500=0.05%, 750=0.02%, 1000=0.02% 00:09:51.050 lat (msec) : 2=0.02% 00:09:51.050 cpu : usr=2.20%, sys=8.10%, ctx=5731, majf=0, minf=15 00:09:51.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.050 issued rwts: total=2659,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.050 00:09:51.050 Run status group 0 (all jobs): 00:09:51.050 READ: bw=36.0MiB/s (37.8MB/s), 8008KiB/s-10.4MiB/s (8200kB/s-10.9MB/s), io=36.1MiB (37.8MB), run=1001-1001msec 00:09:51.050 WRITE: bw=39.7MiB/s (41.6MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.7MiB (41.6MB), run=1001-1001msec 00:09:51.050 00:09:51.050 Disk stats (read/write): 00:09:51.050 nvme0n1: ios=1586/2026, merge=0/0, ticks=446/387, in_queue=833, util=87.98% 00:09:51.050 nvme0n2: ios=1580/2028, merge=0/0, ticks=459/389, in_queue=848, util=88.97% 00:09:51.050 nvme0n3: ios=2228/2560, merge=0/0, ticks=418/371, in_queue=789, util=89.26% 00:09:51.050 nvme0n4: ios=2352/2560, merge=0/0, ticks=432/369, in_queue=801, util=89.81% 00:09:51.050 09:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:51.050 [global] 00:09:51.050 thread=1 00:09:51.050 invalidate=1 00:09:51.050 rw=randwrite 00:09:51.050 time_based=1 00:09:51.050 runtime=1 00:09:51.050 ioengine=libaio 00:09:51.050 direct=1 00:09:51.050 bs=4096 00:09:51.050 iodepth=1 00:09:51.050 norandommap=0 00:09:51.050 numjobs=1 00:09:51.050 00:09:51.050 verify_dump=1 00:09:51.050 verify_backlog=512 00:09:51.050 verify_state_save=0 00:09:51.050 do_verify=1 00:09:51.050 verify=crc32c-intel 00:09:51.050 [job0] 00:09:51.050 filename=/dev/nvme0n1 00:09:51.050 [job1] 00:09:51.050 filename=/dev/nvme0n2 00:09:51.050 [job2] 00:09:51.050 filename=/dev/nvme0n3 00:09:51.050 [job3] 00:09:51.050 filename=/dev/nvme0n4 00:09:51.050 Could not set queue depth (nvme0n1) 00:09:51.050 Could not set queue depth (nvme0n2) 00:09:51.050 Could not set queue depth (nvme0n3) 00:09:51.050 Could not set queue depth (nvme0n4) 00:09:51.050 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.050 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.050 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.050 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.050 fio-3.35 00:09:51.050 Starting 4 threads 00:09:52.424 00:09:52.424 job0: (groupid=0, jobs=1): err= 0: pid=66876: Tue Oct 8 09:15:43 2024 00:09:52.424 read: IOPS=3005, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1001msec) 00:09:52.424 slat (nsec): min=11275, max=31195, avg=13215.26, stdev=1800.69 00:09:52.424 clat (usec): min=135, max=1502, avg=167.00, stdev=27.23 00:09:52.424 lat (usec): min=147, max=1524, avg=180.21, stdev=27.44 00:09:52.424 clat percentiles (usec): 
00:09:52.424 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:09:52.424 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:09:52.424 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:09:52.424 | 99.00th=[ 202], 99.50th=[ 210], 99.90th=[ 255], 99.95th=[ 297], 00:09:52.424 | 99.99th=[ 1500] 00:09:52.424 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:52.424 slat (usec): min=15, max=111, avg=19.95, stdev= 4.40 00:09:52.424 clat (usec): min=68, max=694, avg=125.42, stdev=18.65 00:09:52.424 lat (usec): min=115, max=712, avg=145.37, stdev=19.25 00:09:52.424 clat percentiles (usec): 00:09:52.424 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 117], 00:09:52.424 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 127], 00:09:52.424 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:09:52.424 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 383], 99.95th=[ 611], 00:09:52.424 | 99.99th=[ 693] 00:09:52.424 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:52.424 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:52.424 lat (usec) : 100=0.08%, 250=99.75%, 500=0.12%, 750=0.03% 00:09:52.424 lat (msec) : 2=0.02% 00:09:52.424 cpu : usr=1.90%, sys=8.40%, ctx=6083, majf=0, minf=13 00:09:52.424 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.424 issued rwts: total=3009,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.424 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.424 job1: (groupid=0, jobs=1): err= 0: pid=66877: Tue Oct 8 09:15:43 2024 00:09:52.424 read: IOPS=2992, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec) 00:09:52.424 slat (nsec): min=11375, max=30797, avg=13289.24, stdev=1808.63 00:09:52.424 clat (usec): min=137, max=387, avg=167.70, stdev=15.89 00:09:52.424 lat (usec): min=149, max=405, avg=180.99, stdev=16.18 00:09:52.424 clat percentiles (usec): 00:09:52.424 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:09:52.424 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:09:52.424 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:09:52.424 | 99.00th=[ 212], 99.50th=[ 255], 99.90th=[ 359], 99.95th=[ 367], 00:09:52.424 | 99.99th=[ 388] 00:09:52.424 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:52.424 slat (nsec): min=13410, max=97115, avg=19697.49, stdev=3762.19 00:09:52.424 clat (usec): min=93, max=1527, avg=125.90, stdev=29.57 00:09:52.424 lat (usec): min=111, max=1546, avg=145.59, stdev=30.02 00:09:52.424 clat percentiles (usec): 00:09:52.424 | 1.00th=[ 101], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 116], 00:09:52.424 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 128], 00:09:52.424 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:09:52.424 | 99.00th=[ 161], 99.50th=[ 182], 99.90th=[ 322], 99.95th=[ 416], 00:09:52.424 | 99.99th=[ 1532] 00:09:52.424 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:52.424 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:52.424 lat (usec) : 100=0.33%, 250=99.32%, 500=0.33% 00:09:52.424 lat (msec) : 2=0.02% 00:09:52.424 cpu : usr=2.20%, sys=8.00%, ctx=6067, majf=0, minf=7 00:09:52.424 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.424 issued rwts: total=2995,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.424 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.424 job2: (groupid=0, jobs=1): err= 0: pid=66878: Tue Oct 8 09:15:43 2024 00:09:52.424 read: IOPS=2571, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1001msec) 00:09:52.424 slat (nsec): min=11833, max=38111, avg=13949.45, stdev=1760.05 00:09:52.424 clat (usec): min=149, max=243, avg=180.73, stdev=12.64 00:09:52.424 lat (usec): min=163, max=256, avg=194.68, stdev=12.84 00:09:52.424 clat percentiles (usec): 00:09:52.424 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:09:52.424 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:09:52.424 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 204], 00:09:52.424 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 233], 99.95th=[ 239], 00:09:52.424 | 99.99th=[ 243] 00:09:52.424 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:52.425 slat (nsec): min=14597, max=86810, avg=20851.60, stdev=4227.82 00:09:52.425 clat (usec): min=104, max=758, avg=138.11, stdev=19.51 00:09:52.425 lat (usec): min=123, max=790, avg=158.96, stdev=20.26 00:09:52.425 clat percentiles (usec): 00:09:52.425 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 129], 00:09:52.425 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:09:52.425 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 157], 00:09:52.425 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 396], 99.95th=[ 486], 00:09:52.425 | 99.99th=[ 758] 00:09:52.425 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:52.425 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:52.425 lat (usec) : 250=99.84%, 500=0.14%, 1000=0.02% 00:09:52.425 cpu : usr=2.50%, sys=7.50%, ctx=5646, majf=0, minf=19 00:09:52.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.425 issued rwts: total=2574,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.425 job3: (groupid=0, jobs=1): err= 0: pid=66879: Tue Oct 8 09:15:43 2024 00:09:52.425 read: IOPS=2634, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:09:52.425 slat (nsec): min=11428, max=28936, avg=12721.71, stdev=1439.40 00:09:52.425 clat (usec): min=149, max=243, avg=177.34, stdev=11.88 00:09:52.425 lat (usec): min=161, max=256, avg=190.07, stdev=11.93 00:09:52.425 clat percentiles (usec): 00:09:52.425 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:09:52.425 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:09:52.425 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 198], 00:09:52.425 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 235], 99.95th=[ 239], 00:09:52.425 | 99.99th=[ 245] 00:09:52.425 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:52.425 slat (nsec): min=14271, max=93875, avg=20465.61, stdev=4364.81 00:09:52.425 clat (usec): min=110, max=1558, avg=138.65, stdev=28.88 00:09:52.425 lat (usec): min=129, max=1578, 
avg=159.11, stdev=29.36 00:09:52.425 clat percentiles (usec): 00:09:52.425 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 129], 00:09:52.425 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:09:52.425 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 157], 00:09:52.425 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 310], 99.95th=[ 412], 00:09:52.425 | 99.99th=[ 1565] 00:09:52.425 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:52.425 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:52.425 lat (usec) : 250=99.93%, 500=0.05% 00:09:52.425 lat (msec) : 2=0.02% 00:09:52.425 cpu : usr=2.60%, sys=7.20%, ctx=5709, majf=0, minf=7 00:09:52.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.425 issued rwts: total=2637,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.425 00:09:52.425 Run status group 0 (all jobs): 00:09:52.425 READ: bw=43.8MiB/s (45.9MB/s), 10.0MiB/s-11.7MiB/s (10.5MB/s-12.3MB/s), io=43.8MiB (45.9MB), run=1001-1001msec 00:09:52.425 WRITE: bw=48.0MiB/s (50.3MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1001msec 00:09:52.425 00:09:52.425 Disk stats (read/write): 00:09:52.425 nvme0n1: ios=2610/2650, merge=0/0, ticks=447/359, in_queue=806, util=87.46% 00:09:52.425 nvme0n2: ios=2590/2627, merge=0/0, ticks=474/359, in_queue=833, util=88.50% 00:09:52.425 nvme0n3: ios=2256/2560, merge=0/0, ticks=417/382, in_queue=799, util=89.20% 00:09:52.425 nvme0n4: ios=2318/2560, merge=0/0, ticks=414/381, in_queue=795, util=89.76% 00:09:52.425 09:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:52.425 [global] 00:09:52.425 thread=1 00:09:52.425 invalidate=1 00:09:52.425 rw=write 00:09:52.425 time_based=1 00:09:52.425 runtime=1 00:09:52.425 ioengine=libaio 00:09:52.425 direct=1 00:09:52.425 bs=4096 00:09:52.425 iodepth=128 00:09:52.425 norandommap=0 00:09:52.425 numjobs=1 00:09:52.425 00:09:52.425 verify_dump=1 00:09:52.425 verify_backlog=512 00:09:52.425 verify_state_save=0 00:09:52.425 do_verify=1 00:09:52.425 verify=crc32c-intel 00:09:52.425 [job0] 00:09:52.425 filename=/dev/nvme0n1 00:09:52.425 [job1] 00:09:52.425 filename=/dev/nvme0n2 00:09:52.425 [job2] 00:09:52.425 filename=/dev/nvme0n3 00:09:52.425 [job3] 00:09:52.425 filename=/dev/nvme0n4 00:09:52.425 Could not set queue depth (nvme0n1) 00:09:52.425 Could not set queue depth (nvme0n2) 00:09:52.425 Could not set queue depth (nvme0n3) 00:09:52.425 Could not set queue depth (nvme0n4) 00:09:52.425 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.425 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.425 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.425 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.425 fio-3.35 00:09:52.425 Starting 4 threads 00:09:53.798 00:09:53.798 job0: (groupid=0, jobs=1): err= 0: pid=66934: Tue Oct 8 09:15:45 2024 00:09:53.798 read: IOPS=2542, 
BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:09:53.798 slat (usec): min=8, max=6633, avg=183.58, stdev=678.98 00:09:53.798 clat (usec): min=14147, max=36059, avg=23033.56, stdev=4170.31 00:09:53.798 lat (usec): min=15442, max=40166, avg=23217.15, stdev=4195.24 00:09:53.798 clat percentiles (usec): 00:09:53.798 | 1.00th=[16450], 5.00th=[18220], 10.00th=[19006], 20.00th=[20579], 00:09:53.798 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21365], 60.00th=[21890], 00:09:53.798 | 70.00th=[22676], 80.00th=[26084], 90.00th=[30802], 95.00th=[31589], 00:09:53.798 | 99.00th=[33162], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:09:53.798 | 99.99th=[35914] 00:09:53.798 write: IOPS=2926, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1007msec); 0 zone resets 00:09:53.798 slat (usec): min=4, max=8040, avg=172.35, stdev=772.11 00:09:53.798 clat (usec): min=6474, max=39638, avg=23077.71, stdev=5007.03 00:09:53.798 lat (usec): min=7031, max=39661, avg=23250.06, stdev=4988.24 00:09:53.798 clat percentiles (usec): 00:09:53.798 | 1.00th=[12911], 5.00th=[15664], 10.00th=[18482], 20.00th=[19792], 00:09:53.798 | 30.00th=[20055], 40.00th=[21365], 50.00th=[22414], 60.00th=[22938], 00:09:53.798 | 70.00th=[23725], 80.00th=[28705], 90.00th=[30016], 95.00th=[31327], 00:09:53.798 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:09:53.798 | 99.99th=[39584] 00:09:53.798 bw ( KiB/s): min=10472, max=12088, per=23.57%, avg=11280.00, stdev=1142.68, samples=2 00:09:53.798 iops : min= 2618, max= 3022, avg=2820.00, stdev=285.67, samples=2 00:09:53.798 lat (msec) : 10=0.29%, 20=21.19%, 50=78.52% 00:09:53.798 cpu : usr=2.68%, sys=7.55%, ctx=664, majf=0, minf=11 00:09:53.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:53.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.798 issued rwts: total=2560,2947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.798 job1: (groupid=0, jobs=1): err= 0: pid=66935: Tue Oct 8 09:15:45 2024 00:09:53.798 read: IOPS=2550, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:09:53.798 slat (usec): min=6, max=8122, avg=174.53, stdev=904.67 00:09:53.798 clat (usec): min=522, max=32149, avg=22757.43, stdev=4145.11 00:09:53.798 lat (usec): min=5100, max=32165, avg=22931.97, stdev=4068.41 00:09:53.798 clat percentiles (usec): 00:09:53.798 | 1.00th=[16188], 5.00th=[20317], 10.00th=[20317], 20.00th=[20579], 00:09:53.798 | 30.00th=[20841], 40.00th=[20841], 50.00th=[21103], 60.00th=[21103], 00:09:53.798 | 70.00th=[21365], 80.00th=[21627], 90.00th=[30802], 95.00th=[31327], 00:09:53.798 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:09:53.798 | 99.99th=[32113] 00:09:53.798 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:53.798 slat (usec): min=7, max=9459, avg=173.74, stdev=873.05 00:09:53.798 clat (usec): min=5104, max=32596, avg=21965.57, stdev=4620.44 00:09:53.798 lat (usec): min=5119, max=32621, avg=22139.30, stdev=4573.56 00:09:53.798 clat percentiles (usec): 00:09:53.798 | 1.00th=[ 5669], 5.00th=[16188], 10.00th=[19530], 20.00th=[19792], 00:09:53.798 | 30.00th=[20055], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:09:53.798 | 70.00th=[20841], 80.00th=[28181], 90.00th=[29754], 95.00th=[31065], 00:09:53.798 | 99.00th=[32375], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:09:53.798 | 99.99th=[32637] 
00:09:53.798 bw ( KiB/s): min=10781, max=12800, per=24.64%, avg=11790.50, stdev=1427.65, samples=2 00:09:53.798 iops : min= 2695, max= 3200, avg=2947.50, stdev=357.09, samples=2 00:09:53.798 lat (usec) : 750=0.02% 00:09:53.798 lat (msec) : 10=0.57%, 20=17.26%, 50=82.16% 00:09:53.798 cpu : usr=2.69%, sys=8.77%, ctx=177, majf=0, minf=11 00:09:53.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:53.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.798 issued rwts: total=2561,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.798 job2: (groupid=0, jobs=1): err= 0: pid=66936: Tue Oct 8 09:15:45 2024 00:09:53.798 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:09:53.798 slat (usec): min=8, max=7890, avg=183.08, stdev=705.44 00:09:53.798 clat (usec): min=13697, max=36104, avg=22755.81, stdev=4211.45 00:09:53.798 lat (usec): min=13710, max=38235, avg=22938.89, stdev=4227.39 00:09:53.798 clat percentiles (usec): 00:09:53.798 | 1.00th=[15401], 5.00th=[17171], 10.00th=[18744], 20.00th=[20579], 00:09:53.798 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:09:53.798 | 70.00th=[22152], 80.00th=[25035], 90.00th=[30802], 95.00th=[31065], 00:09:53.798 | 99.00th=[33817], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:09:53.798 | 99.99th=[35914] 00:09:53.798 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1005msec); 0 zone resets 00:09:53.798 slat (usec): min=6, max=8089, avg=172.37, stdev=782.83 00:09:53.798 clat (usec): min=4439, max=35584, avg=23059.20, stdev=4538.77 00:09:53.798 lat (usec): min=7083, max=35642, avg=23231.57, stdev=4525.51 00:09:53.798 clat percentiles (usec): 00:09:53.798 | 1.00th=[10814], 5.00th=[17171], 10.00th=[18744], 20.00th=[19792], 00:09:53.798 | 30.00th=[20317], 40.00th=[21890], 50.00th=[22676], 60.00th=[23200], 00:09:53.798 | 70.00th=[23462], 80.00th=[26870], 90.00th=[29754], 95.00th=[31065], 00:09:53.798 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:09:53.798 | 99.99th=[35390] 00:09:53.798 bw ( KiB/s): min=10880, max=11760, per=23.65%, avg=11320.00, stdev=622.25, samples=2 00:09:53.798 iops : min= 2720, max= 2940, avg=2830.00, stdev=155.56, samples=2 00:09:53.798 lat (msec) : 10=0.34%, 20=19.65%, 50=80.01% 00:09:53.798 cpu : usr=3.09%, sys=7.27%, ctx=651, majf=0, minf=11 00:09:53.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:53.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.798 issued rwts: total=2560,2957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.799 job3: (groupid=0, jobs=1): err= 0: pid=66937: Tue Oct 8 09:15:45 2024 00:09:53.799 read: IOPS=2553, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:09:53.799 slat (usec): min=6, max=8128, avg=174.28, stdev=904.33 00:09:53.799 clat (usec): min=674, max=32561, avg=22834.89, stdev=4160.34 00:09:53.799 lat (usec): min=4627, max=32578, avg=23009.17, stdev=4084.08 00:09:53.799 clat percentiles (usec): 00:09:53.799 | 1.00th=[16188], 5.00th=[20317], 10.00th=[20579], 20.00th=[20579], 00:09:53.799 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21103], 60.00th=[21103], 00:09:53.799 | 70.00th=[21627], 80.00th=[21890], 
90.00th=[30802], 95.00th=[31589], 00:09:53.799 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:09:53.799 | 99.99th=[32637] 00:09:53.799 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:53.799 slat (usec): min=10, max=8675, avg=173.97, stdev=874.75 00:09:53.799 clat (usec): min=4631, max=31636, avg=21861.86, stdev=4628.09 00:09:53.799 lat (usec): min=4646, max=31662, avg=22035.83, stdev=4581.20 00:09:53.799 clat percentiles (usec): 00:09:53.799 | 1.00th=[ 5211], 5.00th=[16319], 10.00th=[19530], 20.00th=[19792], 00:09:53.799 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:09:53.799 | 70.00th=[20841], 80.00th=[27919], 90.00th=[29754], 95.00th=[31065], 00:09:53.799 | 99.00th=[31589], 99.50th=[31589], 99.90th=[31589], 99.95th=[31589], 00:09:53.799 | 99.99th=[31589] 00:09:53.799 bw ( KiB/s): min=10760, max=12800, per=24.61%, avg=11780.00, stdev=1442.50, samples=2 00:09:53.799 iops : min= 2690, max= 3200, avg=2945.00, stdev=360.62, samples=2 00:09:53.799 lat (usec) : 750=0.02% 00:09:53.799 lat (msec) : 10=0.78%, 20=22.01%, 50=77.19% 00:09:53.799 cpu : usr=2.00%, sys=7.68%, ctx=179, majf=0, minf=19 00:09:53.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:53.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.799 issued rwts: total=2561,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.799 00:09:53.799 Run status group 0 (all jobs): 00:09:53.799 READ: bw=39.7MiB/s (41.7MB/s), 9.93MiB/s-9.97MiB/s (10.4MB/s-10.5MB/s), io=40.0MiB (42.0MB), run=1003-1007msec 00:09:53.799 WRITE: bw=46.7MiB/s (49.0MB/s), 11.4MiB/s-12.0MiB/s (12.0MB/s-12.5MB/s), io=47.1MiB (49.3MB), run=1003-1007msec 00:09:53.799 00:09:53.799 Disk stats (read/write): 00:09:53.799 nvme0n1: ios=2187/2560, merge=0/0, ticks=16161/17374, in_queue=33535, util=88.98% 00:09:53.799 nvme0n2: ios=2289/2560, merge=0/0, ticks=12261/13595, in_queue=25856, util=89.91% 00:09:53.799 nvme0n3: ios=2163/2560, merge=0/0, ticks=16260/17179, in_queue=33439, util=89.36% 00:09:53.799 nvme0n4: ios=2261/2560, merge=0/0, ticks=10891/11548, in_queue=22439, util=89.72% 00:09:53.799 09:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:53.799 [global] 00:09:53.799 thread=1 00:09:53.799 invalidate=1 00:09:53.799 rw=randwrite 00:09:53.799 time_based=1 00:09:53.799 runtime=1 00:09:53.799 ioengine=libaio 00:09:53.799 direct=1 00:09:53.799 bs=4096 00:09:53.799 iodepth=128 00:09:53.799 norandommap=0 00:09:53.799 numjobs=1 00:09:53.799 00:09:53.799 verify_dump=1 00:09:53.799 verify_backlog=512 00:09:53.799 verify_state_save=0 00:09:53.799 do_verify=1 00:09:53.799 verify=crc32c-intel 00:09:53.799 [job0] 00:09:53.799 filename=/dev/nvme0n1 00:09:53.799 [job1] 00:09:53.799 filename=/dev/nvme0n2 00:09:53.799 [job2] 00:09:53.799 filename=/dev/nvme0n3 00:09:53.799 [job3] 00:09:53.799 filename=/dev/nvme0n4 00:09:53.799 Could not set queue depth (nvme0n1) 00:09:53.799 Could not set queue depth (nvme0n2) 00:09:53.799 Could not set queue depth (nvme0n3) 00:09:53.799 Could not set queue depth (nvme0n4) 00:09:53.799 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.799 job1: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.799 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.799 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.799 fio-3.35 00:09:53.799 Starting 4 threads 00:09:55.181 00:09:55.181 job0: (groupid=0, jobs=1): err= 0: pid=66996: Tue Oct 8 09:15:46 2024 00:09:55.181 read: IOPS=2751, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1003msec) 00:09:55.181 slat (usec): min=5, max=10219, avg=181.24, stdev=773.95 00:09:55.181 clat (usec): min=1703, max=33911, avg=22728.89, stdev=3937.43 00:09:55.181 lat (usec): min=2655, max=35685, avg=22910.13, stdev=3969.63 00:09:55.181 clat percentiles (usec): 00:09:55.181 | 1.00th=[ 4686], 5.00th=[15926], 10.00th=[18744], 20.00th=[21365], 00:09:55.181 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:09:55.181 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25035], 95.00th=[28705], 00:09:55.181 | 99.00th=[31851], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:09:55.181 | 99.99th=[33817] 00:09:55.181 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:55.181 slat (usec): min=5, max=11037, avg=155.18, stdev=740.39 00:09:55.181 clat (usec): min=9587, max=33665, avg=20855.65, stdev=4453.69 00:09:55.181 lat (usec): min=9610, max=33897, avg=21010.83, stdev=4448.22 00:09:55.181 clat percentiles (usec): 00:09:55.181 | 1.00th=[11338], 5.00th=[11863], 10.00th=[12518], 20.00th=[17695], 00:09:55.181 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21627], 60.00th=[22152], 00:09:55.181 | 70.00th=[22676], 80.00th=[23725], 90.00th=[25035], 95.00th=[26870], 00:09:55.181 | 99.00th=[31851], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:09:55.181 | 99.99th=[33817] 00:09:55.181 bw ( KiB/s): min=12288, max=12312, per=18.93%, avg=12300.00, stdev=16.97, samples=2 00:09:55.181 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:55.181 lat (msec) : 2=0.02%, 4=0.12%, 10=0.93%, 20=19.72%, 50=79.22% 00:09:55.181 cpu : usr=2.79%, sys=8.38%, ctx=610, majf=0, minf=15 00:09:55.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:55.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.181 issued rwts: total=2760,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.181 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.181 job1: (groupid=0, jobs=1): err= 0: pid=66997: Tue Oct 8 09:15:46 2024 00:09:55.181 read: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1003msec) 00:09:55.181 slat (usec): min=5, max=8189, avg=183.44, stdev=703.70 00:09:55.181 clat (usec): min=2210, max=33380, avg=23412.26, stdev=3161.70 00:09:55.181 lat (usec): min=8523, max=33462, avg=23595.69, stdev=3161.49 00:09:55.181 clat percentiles (usec): 00:09:55.181 | 1.00th=[10814], 5.00th=[17695], 10.00th=[20317], 20.00th=[21890], 00:09:55.181 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:09:55.181 | 70.00th=[24249], 80.00th=[24773], 90.00th=[26346], 95.00th=[28181], 00:09:55.181 | 99.00th=[31851], 99.50th=[32113], 99.90th=[32637], 99.95th=[32637], 00:09:55.181 | 99.99th=[33424] 00:09:55.181 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:55.181 slat (usec): min=3, max=12245, avg=157.62, stdev=779.87 00:09:55.181 clat (usec): 
min=7179, max=33301, avg=20917.02, stdev=4284.78 00:09:55.181 lat (usec): min=9013, max=33318, avg=21074.64, stdev=4298.66 00:09:55.181 clat percentiles (usec): 00:09:55.181 | 1.00th=[ 9110], 5.00th=[12125], 10.00th=[14484], 20.00th=[18220], 00:09:55.181 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21890], 60.00th=[22414], 00:09:55.181 | 70.00th=[22938], 80.00th=[23725], 90.00th=[24773], 95.00th=[25822], 00:09:55.181 | 99.00th=[31851], 99.50th=[32113], 99.90th=[32637], 99.95th=[33162], 00:09:55.181 | 99.99th=[33424] 00:09:55.181 bw ( KiB/s): min=12288, max=12288, per=18.92%, avg=12288.00, stdev= 0.00, samples=2 00:09:55.181 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:55.181 lat (msec) : 4=0.02%, 10=1.46%, 20=16.90%, 50=81.63% 00:09:55.181 cpu : usr=1.90%, sys=8.78%, ctx=636, majf=0, minf=11 00:09:55.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:55.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.181 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.182 job2: (groupid=0, jobs=1): err= 0: pid=66998: Tue Oct 8 09:15:46 2024 00:09:55.182 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:55.182 slat (usec): min=9, max=6554, avg=98.39, stdev=612.25 00:09:55.182 clat (usec): min=8460, max=21335, avg=13806.81, stdev=1488.09 00:09:55.182 lat (usec): min=8475, max=25380, avg=13905.20, stdev=1512.57 00:09:55.182 clat percentiles (usec): 00:09:55.182 | 1.00th=[ 8848], 5.00th=[12518], 10.00th=[13042], 20.00th=[13304], 00:09:55.182 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[13960], 00:09:55.182 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:09:55.182 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:09:55.182 | 99.99th=[21365] 00:09:55.182 write: IOPS=5034, BW=19.7MiB/s (20.6MB/s)(19.7MiB/1004msec); 0 zone resets 00:09:55.182 slat (usec): min=3, max=12410, avg=100.38, stdev=602.18 00:09:55.182 clat (usec): min=561, max=21551, avg=12591.30, stdev=1691.42 00:09:55.182 lat (usec): min=5003, max=21593, avg=12691.68, stdev=1609.28 00:09:55.182 clat percentiles (usec): 00:09:55.182 | 1.00th=[ 6325], 5.00th=[10683], 10.00th=[11338], 20.00th=[11731], 00:09:55.182 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:09:55.182 | 70.00th=[13042], 80.00th=[13304], 90.00th=[14091], 95.00th=[14615], 00:09:55.182 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21365], 99.95th=[21627], 00:09:55.182 | 99.99th=[21627] 00:09:55.182 bw ( KiB/s): min=18936, max=20480, per=30.34%, avg=19708.00, stdev=1091.77, samples=2 00:09:55.182 iops : min= 4734, max= 5120, avg=4927.00, stdev=272.94, samples=2 00:09:55.182 lat (usec) : 750=0.01% 00:09:55.182 lat (msec) : 10=3.89%, 20=94.71%, 50=1.39% 00:09:55.182 cpu : usr=5.98%, sys=12.56%, ctx=203, majf=0, minf=15 00:09:55.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:55.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.182 issued rwts: total=4608,5055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.182 job3: (groupid=0, jobs=1): err= 0: pid=66999: Tue Oct 8 09:15:46 2024 00:09:55.182 
read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:55.182 slat (usec): min=5, max=5421, avg=103.39, stdev=464.74 00:09:55.182 clat (usec): min=9502, max=19579, avg=13628.18, stdev=996.63 00:09:55.182 lat (usec): min=10124, max=19610, avg=13731.57, stdev=1016.69 00:09:55.182 clat percentiles (usec): 00:09:55.182 | 1.00th=[10683], 5.00th=[11863], 10.00th=[12387], 20.00th=[13042], 00:09:55.182 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:09:55.182 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14353], 95.00th=[15139], 00:09:55.182 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:09:55.182 | 99.99th=[19530] 00:09:55.182 write: IOPS=5085, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1004msec); 0 zone resets 00:09:55.182 slat (usec): min=10, max=5936, avg=94.33, stdev=552.76 00:09:55.182 clat (usec): min=3854, max=19973, avg=12560.42, stdev=1540.31 00:09:55.182 lat (usec): min=3874, max=20018, avg=12654.75, stdev=1627.40 00:09:55.182 clat percentiles (usec): 00:09:55.182 | 1.00th=[ 5145], 5.00th=[10421], 10.00th=[11338], 20.00th=[11994], 00:09:55.182 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:09:55.182 | 70.00th=[12911], 80.00th=[13173], 90.00th=[14091], 95.00th=[14615], 00:09:55.182 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19006], 99.95th=[19792], 00:09:55.182 | 99.99th=[20055] 00:09:55.182 bw ( KiB/s): min=19352, max=20521, per=30.69%, avg=19936.50, stdev=826.61, samples=2 00:09:55.182 iops : min= 4838, max= 5130, avg=4984.00, stdev=206.48, samples=2 00:09:55.182 lat (msec) : 4=0.07%, 10=2.03%, 20=97.90% 00:09:55.182 cpu : usr=4.49%, sys=14.16%, ctx=293, majf=0, minf=9 00:09:55.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:55.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.182 issued rwts: total=4608,5106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.182 00:09:55.182 Run status group 0 (all jobs): 00:09:55.182 READ: bw=57.0MiB/s (59.8MB/s), 10.5MiB/s-17.9MiB/s (11.0MB/s-18.8MB/s), io=57.3MiB (60.1MB), run=1003-1004msec 00:09:55.182 WRITE: bw=63.4MiB/s (66.5MB/s), 12.0MiB/s-19.9MiB/s (12.5MB/s-20.8MB/s), io=63.7MiB (66.8MB), run=1003-1004msec 00:09:55.182 00:09:55.182 Disk stats (read/write): 00:09:55.182 nvme0n1: ios=2454/2560, merge=0/0, ticks=27229/23472, in_queue=50701, util=87.27% 00:09:55.182 nvme0n2: ios=2412/2560, merge=0/0, ticks=27834/26393, in_queue=54227, util=88.88% 00:09:55.182 nvme0n3: ios=4048/4096, merge=0/0, ticks=53030/48128, in_queue=101158, util=89.22% 00:09:55.182 nvme0n4: ios=4088/4109, merge=0/0, ticks=26814/21364, in_queue=48178, util=89.68% 00:09:55.182 09:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:55.182 09:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67017 00:09:55.182 09:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:55.182 09:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:55.182 [global] 00:09:55.182 thread=1 00:09:55.182 invalidate=1 00:09:55.182 rw=read 00:09:55.182 time_based=1 00:09:55.182 runtime=10 00:09:55.182 ioengine=libaio 00:09:55.182 direct=1 00:09:55.182 bs=4096 00:09:55.182 iodepth=1 00:09:55.182 norandommap=1 
00:09:55.182 numjobs=1 00:09:55.182 00:09:55.182 [job0] 00:09:55.182 filename=/dev/nvme0n1 00:09:55.182 [job1] 00:09:55.182 filename=/dev/nvme0n2 00:09:55.182 [job2] 00:09:55.182 filename=/dev/nvme0n3 00:09:55.182 [job3] 00:09:55.182 filename=/dev/nvme0n4 00:09:55.182 Could not set queue depth (nvme0n1) 00:09:55.182 Could not set queue depth (nvme0n2) 00:09:55.182 Could not set queue depth (nvme0n3) 00:09:55.182 Could not set queue depth (nvme0n4) 00:09:55.182 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.182 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.182 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.182 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.182 fio-3.35 00:09:55.182 Starting 4 threads 00:09:58.479 09:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:58.479 fio: pid=67061, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:58.479 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33505280, buflen=4096 00:09:58.479 09:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:58.831 fio: pid=67060, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:58.831 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=40103936, buflen=4096 00:09:58.831 09:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.831 09:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:58.831 fio: pid=67058, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:58.831 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2695168, buflen=4096 00:09:59.110 09:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.110 09:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:59.110 fio: pid=67059, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:59.110 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5722112, buflen=4096 00:09:59.368 00:09:59.368 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67058: Tue Oct 8 09:15:50 2024 00:09:59.368 read: IOPS=4804, BW=18.8MiB/s (19.7MB/s)(66.6MiB/3547msec) 00:09:59.368 slat (usec): min=10, max=13830, avg=15.54, stdev=158.14 00:09:59.368 clat (usec): min=134, max=3344, avg=191.05, stdev=47.24 00:09:59.368 lat (usec): min=145, max=14098, avg=206.59, stdev=165.71 00:09:59.368 clat percentiles (usec): 00:09:59.368 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:09:59.368 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:09:59.368 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 225], 00:09:59.368 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 562], 
99.95th=[ 807], 00:09:59.368 | 99.99th=[ 3130] 00:09:59.368 bw ( KiB/s): min=18752, max=19584, per=35.03%, avg=19329.50, stdev=321.90, samples=6 00:09:59.368 iops : min= 4688, max= 4896, avg=4832.33, stdev=80.50, samples=6 00:09:59.368 lat (usec) : 250=99.34%, 500=0.54%, 750=0.05%, 1000=0.03% 00:09:59.368 lat (msec) : 2=0.02%, 4=0.01% 00:09:59.368 cpu : usr=1.49%, sys=5.70%, ctx=17051, majf=0, minf=1 00:09:59.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.368 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.368 issued rwts: total=17043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.368 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67059: Tue Oct 8 09:15:50 2024 00:09:59.368 read: IOPS=4646, BW=18.1MiB/s (19.0MB/s)(69.5MiB/3827msec) 00:09:59.368 slat (usec): min=10, max=11707, avg=15.90, stdev=165.06 00:09:59.368 clat (usec): min=129, max=2311, avg=197.90, stdev=46.44 00:09:59.368 lat (usec): min=140, max=11997, avg=213.80, stdev=172.98 00:09:59.368 clat percentiles (usec): 00:09:59.368 | 1.00th=[ 149], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 180], 00:09:59.368 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:09:59.368 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 231], 00:09:59.368 | 99.00th=[ 260], 99.50th=[ 351], 99.90th=[ 857], 99.95th=[ 1123], 00:09:59.368 | 99.99th=[ 1975] 00:09:59.368 bw ( KiB/s): min=17252, max=18938, per=33.55%, avg=18514.00, stdev=588.74, samples=7 00:09:59.368 iops : min= 4313, max= 4734, avg=4628.43, stdev=147.13, samples=7 00:09:59.368 lat (usec) : 250=98.61%, 500=1.11%, 750=0.15%, 1000=0.04% 00:09:59.368 lat (msec) : 2=0.07%, 4=0.01% 00:09:59.368 cpu : usr=1.49%, sys=5.38%, ctx=17793, majf=0, minf=2 00:09:59.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.368 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.368 issued rwts: total=17782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.368 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67060: Tue Oct 8 09:15:50 2024 00:09:59.368 read: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(38.2MiB/3290msec) 00:09:59.368 slat (usec): min=11, max=10223, avg=16.86, stdev=142.81 00:09:59.368 clat (usec): min=151, max=3177, avg=317.54, stdev=65.70 00:09:59.368 lat (usec): min=166, max=10464, avg=334.40, stdev=155.81 00:09:59.368 clat percentiles (usec): 00:09:59.368 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 212], 20.00th=[ 297], 00:09:59.368 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:09:59.368 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 375], 00:09:59.368 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 570], 99.95th=[ 914], 00:09:59.368 | 99.99th=[ 3163] 00:09:59.368 bw ( KiB/s): min=11208, max=11960, per=20.86%, avg=11513.50, stdev=250.37, samples=6 00:09:59.368 iops : min= 2802, max= 2990, avg=2878.33, stdev=62.62, samples=6 00:09:59.368 lat (usec) : 250=15.93%, 500=83.94%, 750=0.06%, 1000=0.04% 00:09:59.368 lat (msec) : 4=0.02% 00:09:59.368 cpu : usr=0.82%, sys=4.17%, ctx=9801, majf=0, minf=2 
00:09:59.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.368 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.368 issued rwts: total=9792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.368 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67061: Tue Oct 8 09:15:50 2024 00:09:59.368 read: IOPS=2750, BW=10.7MiB/s (11.3MB/s)(32.0MiB/2974msec) 00:09:59.368 slat (nsec): min=11102, max=68897, avg=15664.90, stdev=4941.84 00:09:59.368 clat (usec): min=183, max=7255, avg=346.15, stdev=176.43 00:09:59.368 lat (usec): min=202, max=7280, avg=361.81, stdev=176.67 00:09:59.368 clat percentiles (usec): 00:09:59.368 | 1.00th=[ 277], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:09:59.368 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:09:59.368 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 383], 00:09:59.368 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[ 3458], 99.95th=[ 3785], 00:09:59.368 | 99.99th=[ 7242] 00:09:59.368 bw ( KiB/s): min=10576, max=11449, per=19.93%, avg=10998.60, stdev=350.74, samples=5 00:09:59.368 iops : min= 2644, max= 2862, avg=2749.60, stdev=87.61, samples=5 00:09:59.368 lat (usec) : 250=0.37%, 500=99.02%, 750=0.35%, 1000=0.05% 00:09:59.368 lat (msec) : 2=0.02%, 4=0.12%, 10=0.05% 00:09:59.368 cpu : usr=0.84%, sys=4.04%, ctx=8181, majf=0, minf=2 00:09:59.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.368 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.368 issued rwts: total=8181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.368 00:09:59.368 Run status group 0 (all jobs): 00:09:59.368 READ: bw=53.9MiB/s (56.5MB/s), 10.7MiB/s-18.8MiB/s (11.3MB/s-19.7MB/s), io=206MiB (216MB), run=2974-3827msec 00:09:59.368 00:09:59.368 Disk stats (read/write): 00:09:59.368 nvme0n1: ios=16124/0, merge=0/0, ticks=3183/0, in_queue=3183, util=95.31% 00:09:59.368 nvme0n2: ios=16692/0, merge=0/0, ticks=3425/0, in_queue=3425, util=95.45% 00:09:59.368 nvme0n3: ios=9039/0, merge=0/0, ticks=3000/0, in_queue=3000, util=96.30% 00:09:59.368 nvme0n4: ios=7905/0, merge=0/0, ticks=2737/0, in_queue=2737, util=96.46% 00:09:59.368 09:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.368 09:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:59.627 09:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.627 09:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:59.884 09:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.884 09:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:00.142 
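The throughput figures in the summary above are internally consistent: bandwidth is just IOPS times the 4 KiB block size, and each job's io total matches its issued read count. A quick shell check using job0's numbers (illustrative only):

    # 4804 IOPS * 4096 B  ->  bytes/s, reported above as 18.8 MiB/s (19.7 MB/s)
    echo $(( 4804 * 4096 ))                  # 19677184
    # 17043 issued reads * 4 KiB  ->  ~66.6 MiB transferred over the ~3.55 s run
    echo $(( 17043 * 4096 / 1024 / 1024 ))   # 66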
09:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.142 09:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:00.399 09:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.399 09:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:00.657 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:00.657 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 67017 00:10:00.657 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:00.657 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:00.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.915 nvmf hotplug test: fio failed as expected 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:00.915 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:01.173 09:15:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.173 rmmod nvme_tcp 00:10:01.173 rmmod nvme_fabrics 00:10:01.173 rmmod nvme_keyring 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 66624 ']' 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 66624 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 66624 ']' 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 66624 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66624 00:10:01.173 killing process with pid 66624 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66624' 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 66624 00:10:01.173 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 66624 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:01.431 09:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:01.431 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:01.431 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:01.431 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.431 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:01.690 00:10:01.690 real 0m20.421s 00:10:01.690 user 1m15.891s 00:10:01.690 sys 0m10.592s 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.690 ************************************ 00:10:01.690 END TEST nvmf_fio_target 00:10:01.690 ************************************ 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.690 ************************************ 00:10:01.690 START TEST nvmf_bdevio 00:10:01.690 ************************************ 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:01.690 * Looking for test storage... 
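The nvmf_fio_target run that finishes above is a hotplug test: fio-wrapper starts read-only jobs against the four exported namespaces, the backing raid/concat and malloc bdevs are deleted over RPC while I/O is still in flight, each job dies with "Operation not supported", and the script treats fio's non-zero exit as success ("fio failed as expected"). A condensed sketch of that flow, using the same paths and bdev names as the trace (the exact ordering and helpers in fio.sh may differ):

    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # hot-remove the backing bdevs while the read jobs are running
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
    # fio exiting non-zero is the expected outcome here
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'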
00:10:01.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:01.690 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:01.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.950 --rc genhtml_branch_coverage=1 00:10:01.950 --rc genhtml_function_coverage=1 00:10:01.950 --rc genhtml_legend=1 00:10:01.950 --rc geninfo_all_blocks=1 00:10:01.950 --rc geninfo_unexecuted_blocks=1 00:10:01.950 00:10:01.950 ' 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:01.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.950 --rc genhtml_branch_coverage=1 00:10:01.950 --rc genhtml_function_coverage=1 00:10:01.950 --rc genhtml_legend=1 00:10:01.950 --rc geninfo_all_blocks=1 00:10:01.950 --rc geninfo_unexecuted_blocks=1 00:10:01.950 00:10:01.950 ' 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:01.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.950 --rc genhtml_branch_coverage=1 00:10:01.950 --rc genhtml_function_coverage=1 00:10:01.950 --rc genhtml_legend=1 00:10:01.950 --rc geninfo_all_blocks=1 00:10:01.950 --rc geninfo_unexecuted_blocks=1 00:10:01.950 00:10:01.950 ' 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:01.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.950 --rc genhtml_branch_coverage=1 00:10:01.950 --rc genhtml_function_coverage=1 00:10:01.950 --rc genhtml_legend=1 00:10:01.950 --rc geninfo_all_blocks=1 00:10:01.950 --rc geninfo_unexecuted_blocks=1 00:10:01.950 00:10:01.950 ' 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.950 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.951 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
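nvmftestinit, traced next, builds the virtual network the whole test runs on: a namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.3 and 10.0.0.4), the initiator ends left in the root namespace (10.0.0.1 and 10.0.0.2), everything joined through the nvmf_br bridge, plus iptables ACCEPT rules for TCP port 4420. The "Cannot find device" and "Cannot open network namespace" lines that follow are only the pre-setup cleanup failing harmlessly on a clean host. Condensed into a sketch (one interface per side shown; illustrative, not a copy of nvmf_veth_init):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_br up;      ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator side can reach the target address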
00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # nvmf_veth_init 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:01.951 Cannot find device "nvmf_init_br" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:01.951 Cannot find device "nvmf_init_br2" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:01.951 Cannot find device "nvmf_tgt_br" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:01.951 Cannot find device "nvmf_tgt_br2" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:01.951 Cannot find device "nvmf_init_br" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:01.951 Cannot find device "nvmf_init_br2" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:01.951 Cannot find device "nvmf_tgt_br" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:01.951 Cannot find device "nvmf_tgt_br2" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:01.951 Cannot find device "nvmf_br" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:01.951 Cannot find device "nvmf_init_if" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:01.951 Cannot find device "nvmf_init_if2" 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:01.951 
09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:01.951 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:01.952 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:01.952 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:02.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:02.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:10:02.211 00:10:02.211 --- 10.0.0.3 ping statistics --- 00:10:02.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.211 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:02.211 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:02.211 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:10:02.211 00:10:02.211 --- 10.0.0.4 ping statistics --- 00:10:02.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.211 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:02.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:02.211 00:10:02.211 --- 10.0.0.1 ping statistics --- 00:10:02.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.211 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:02.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:02.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:10:02.211 00:10:02.211 --- 10.0.0.2 ping statistics --- 00:10:02.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.211 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # return 0 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=67390 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 67390 00:10:02.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 67390 ']' 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.211 09:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.469 [2024-10-08 09:15:53.922020] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
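nvmfappstart then launches the NVMe-oF target inside that namespace with core mask 0x78 (cores 3-6, matching the reactor lines below) and blocks until the app answers on the RPC socket. Condensed from the trace; the polling loop is an illustrative stand-in for waitforlisten, not its actual implementation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # wait for /var/tmp/spdk.sock to accept RPCs (waitforlisten 67390 in the trace)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done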
00:10:02.469 [2024-10-08 09:15:53.922323] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.469 [2024-10-08 09:15:54.060048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.728 [2024-10-08 09:15:54.156753] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.728 [2024-10-08 09:15:54.157065] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.728 [2024-10-08 09:15:54.157101] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.728 [2024-10-08 09:15:54.157110] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.728 [2024-10-08 09:15:54.157117] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.728 [2024-10-08 09:15:54.158404] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:02.728 [2024-10-08 09:15:54.158574] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:10:02.728 [2024-10-08 09:15:54.158705] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.728 [2024-10-08 09:15:54.158705] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:10:02.728 [2024-10-08 09:15:54.214915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.728 [2024-10-08 09:15:54.336517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.728 Malloc0 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.728 [2024-10-08 09:15:54.394181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:02.728 { 00:10:02.728 "params": { 00:10:02.728 "name": "Nvme$subsystem", 00:10:02.728 "trtype": "$TEST_TRANSPORT", 00:10:02.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:02.728 "adrfam": "ipv4", 00:10:02.728 "trsvcid": "$NVMF_PORT", 00:10:02.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.728 "hdgst": ${hdgst:-false}, 00:10:02.728 "ddgst": ${ddgst:-false} 00:10:02.728 }, 00:10:02.728 "method": "bdev_nvme_attach_controller" 00:10:02.728 } 00:10:02.728 EOF 00:10:02.728 )") 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
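With the target listening, bdevio.sh provisions it entirely over RPC (the rpc_cmd calls above) and then runs the bdevio binary as an initiator, feeding it the JSON that gen_nvmf_target_json prints below via /dev/fd/62. The same sequence as a plain shell sketch, reusing the values from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # equivalent to the traced '--json /dev/fd/62' redirection
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)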
00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:02.728 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:02.728 "params": { 00:10:02.728 "name": "Nvme1", 00:10:02.728 "trtype": "tcp", 00:10:02.728 "traddr": "10.0.0.3", 00:10:02.728 "adrfam": "ipv4", 00:10:02.728 "trsvcid": "4420", 00:10:02.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.728 "hdgst": false, 00:10:02.728 "ddgst": false 00:10:02.728 }, 00:10:02.728 "method": "bdev_nvme_attach_controller" 00:10:02.728 }' 00:10:02.987 [2024-10-08 09:15:54.452940] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:10:02.987 [2024-10-08 09:15:54.453049] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67424 ] 00:10:02.987 [2024-10-08 09:15:54.594583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:03.245 [2024-10-08 09:15:54.720128] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.245 [2024-10-08 09:15:54.720276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.245 [2024-10-08 09:15:54.720283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.245 [2024-10-08 09:15:54.789268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:03.245 I/O targets: 00:10:03.245 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:03.245 00:10:03.245 00:10:03.245 CUnit - A unit testing framework for C - Version 2.1-3 00:10:03.245 http://cunit.sourceforge.net/ 00:10:03.245 00:10:03.245 00:10:03.245 Suite: bdevio tests on: Nvme1n1 00:10:03.245 Test: blockdev write read block ...passed 00:10:03.245 Test: blockdev write zeroes read block ...passed 00:10:03.245 Test: blockdev write zeroes read no split ...passed 00:10:03.503 Test: blockdev write zeroes read split ...passed 00:10:03.503 Test: blockdev write zeroes read split partial ...passed 00:10:03.503 Test: blockdev reset ...[2024-10-08 09:15:54.946023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:03.503 [2024-10-08 09:15:54.946165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1040 (9): Bad file descriptor 00:10:03.503 passed 00:10:03.503 Test: blockdev write read 8 blocks ...[2024-10-08 09:15:54.959782] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:03.503 passed 00:10:03.503 Test: blockdev write read size > 128k ...passed 00:10:03.503 Test: blockdev write read invalid size ...passed 00:10:03.503 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:03.503 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:03.503 Test: blockdev write read max offset ...passed 00:10:03.503 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:03.503 Test: blockdev writev readv 8 blocks ...passed 00:10:03.503 Test: blockdev writev readv 30 x 1block ...passed 00:10:03.503 Test: blockdev writev readv block ...passed 00:10:03.503 Test: blockdev writev readv size > 128k ...passed 00:10:03.503 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:03.503 Test: blockdev comparev and writev ...[2024-10-08 09:15:54.970685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.503 [2024-10-08 09:15:54.971132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.971170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.503 [2024-10-08 09:15:54.971186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.971499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.503 [2024-10-08 09:15:54.971522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.971543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.503 [2024-10-08 09:15:54.971556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.971861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.503 [2024-10-08 09:15:54.971883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.971904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.503 [2024-10-08 09:15:54.971917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.972227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.503 [2024-10-08 09:15:54.972255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.972277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.503 [2024-10-08 09:15:54.972289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:03.503 passed 00:10:03.503 Test: blockdev nvme passthru rw ...passed 00:10:03.503 Test: blockdev nvme passthru vendor specific ...[2024-10-08 09:15:54.973867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.503 [2024-10-08 09:15:54.973912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.974466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.503 [2024-10-08 09:15:54.974509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.974629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.503 [2024-10-08 09:15:54.974650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:03.503 [2024-10-08 09:15:54.974787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.503 [2024-10-08 09:15:54.974814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:03.503 passed 00:10:03.503 Test: blockdev nvme admin passthru ...passed 00:10:03.503 Test: blockdev copy ...passed 00:10:03.503 00:10:03.503 Run Summary: Type Total Ran Passed Failed Inactive 00:10:03.503 suites 1 1 n/a 0 0 00:10:03.503 tests 23 23 23 0 0 00:10:03.503 asserts 152 152 152 0 n/a 00:10:03.503 00:10:03.503 Elapsed time = 0.146 seconds 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.762 rmmod nvme_tcp 00:10:03.762 rmmod nvme_fabrics 00:10:03.762 rmmod nvme_keyring 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@515 -- # '[' -n 67390 ']' 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 67390 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 67390 ']' 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 67390 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67390 00:10:03.762 killing process with pid 67390 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67390' 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 67390 00:10:03.762 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 67390 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:04.020 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:04.279 09:15:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:04.279 00:10:04.279 real 0m2.676s 00:10:04.279 user 0m7.201s 00:10:04.279 sys 0m0.861s 00:10:04.279 ************************************ 00:10:04.279 END TEST nvmf_bdevio 00:10:04.279 ************************************ 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:04.279 ************************************ 00:10:04.279 END TEST nvmf_target_core 00:10:04.279 ************************************ 00:10:04.279 00:10:04.279 real 2m42.232s 00:10:04.279 user 7m3.943s 00:10:04.279 sys 0m54.759s 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.279 09:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.538 09:15:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.538 09:15:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.538 09:15:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.538 09:15:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.538 ************************************ 00:10:04.538 START TEST nvmf_target_extra 00:10:04.538 ************************************ 00:10:04.538 09:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.538 * Looking for test storage... 
00:10:04.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:04.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.538 --rc genhtml_branch_coverage=1 00:10:04.538 --rc genhtml_function_coverage=1 00:10:04.538 --rc genhtml_legend=1 00:10:04.538 --rc geninfo_all_blocks=1 00:10:04.538 --rc geninfo_unexecuted_blocks=1 00:10:04.538 00:10:04.538 ' 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:04.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.538 --rc genhtml_branch_coverage=1 00:10:04.538 --rc genhtml_function_coverage=1 00:10:04.538 --rc genhtml_legend=1 00:10:04.538 --rc geninfo_all_blocks=1 00:10:04.538 --rc geninfo_unexecuted_blocks=1 00:10:04.538 00:10:04.538 ' 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:04.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.538 --rc genhtml_branch_coverage=1 00:10:04.538 --rc genhtml_function_coverage=1 00:10:04.538 --rc genhtml_legend=1 00:10:04.538 --rc geninfo_all_blocks=1 00:10:04.538 --rc geninfo_unexecuted_blocks=1 00:10:04.538 00:10:04.538 ' 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:04.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.538 --rc genhtml_branch_coverage=1 00:10:04.538 --rc genhtml_function_coverage=1 00:10:04.538 --rc genhtml_legend=1 00:10:04.538 --rc geninfo_all_blocks=1 00:10:04.538 --rc geninfo_unexecuted_blocks=1 00:10:04.538 00:10:04.538 ' 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.538 09:15:56 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.538 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.539 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.539 ************************************ 00:10:04.539 START TEST nvmf_auth_target 00:10:04.539 ************************************ 00:10:04.539 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:04.798 * Looking for test storage... 
00:10:04.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.798 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:04.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.799 --rc genhtml_branch_coverage=1 00:10:04.799 --rc genhtml_function_coverage=1 00:10:04.799 --rc genhtml_legend=1 00:10:04.799 --rc geninfo_all_blocks=1 00:10:04.799 --rc geninfo_unexecuted_blocks=1 00:10:04.799 00:10:04.799 ' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:04.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.799 --rc genhtml_branch_coverage=1 00:10:04.799 --rc genhtml_function_coverage=1 00:10:04.799 --rc genhtml_legend=1 00:10:04.799 --rc geninfo_all_blocks=1 00:10:04.799 --rc geninfo_unexecuted_blocks=1 00:10:04.799 00:10:04.799 ' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:04.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.799 --rc genhtml_branch_coverage=1 00:10:04.799 --rc genhtml_function_coverage=1 00:10:04.799 --rc genhtml_legend=1 00:10:04.799 --rc geninfo_all_blocks=1 00:10:04.799 --rc geninfo_unexecuted_blocks=1 00:10:04.799 00:10:04.799 ' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:04.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.799 --rc genhtml_branch_coverage=1 00:10:04.799 --rc genhtml_function_coverage=1 00:10:04.799 --rc genhtml_legend=1 00:10:04.799 --rc geninfo_all_blocks=1 00:10:04.799 --rc geninfo_unexecuted_blocks=1 00:10:04.799 00:10:04.799 ' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:10:04.799 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.800 
09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:04.800 Cannot find device "nvmf_init_br" 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:04.800 Cannot find device "nvmf_init_br2" 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:04.800 Cannot find device "nvmf_tgt_br" 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:04.800 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:05.058 Cannot find device "nvmf_tgt_br2" 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:05.058 Cannot find device "nvmf_init_br" 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:05.058 Cannot find device "nvmf_init_br2" 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:05.058 Cannot find device "nvmf_tgt_br" 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:05.058 Cannot find device "nvmf_tgt_br2" 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:05.058 Cannot find device "nvmf_br" 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:05.058 Cannot find device "nvmf_init_if" 00:10:05.058 09:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:05.058 Cannot find device "nvmf_init_if2" 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:05.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:05.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:05.058 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:05.059 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:05.059 09:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:05.059 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:05.059 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:05.059 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:05.059 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:05.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:05.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:10:05.317 00:10:05.317 --- 10.0.0.3 ping statistics --- 00:10:05.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.317 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:05.317 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:05.317 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:10:05.317 00:10:05.317 --- 10.0.0.4 ping statistics --- 00:10:05.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.317 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:05.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:05.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:10:05.317 00:10:05.317 --- 10.0.0.1 ping statistics --- 00:10:05.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.317 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:05.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:10:05.317 00:10:05.317 --- 10.0.0.2 ping statistics --- 00:10:05.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.317 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # return 0 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=67709 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 67709 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67709 ']' 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
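The veth/namespace plumbing traced between 00:10:04.800 and 00:10:05.317 can be condensed into a short standalone sketch: one initiator veth pair and one target veth pair (instead of the two of each created in this run), joined by the nvmf_br bridge, with the target end moved into the nvmf_tgt_ns_spdk namespace. This is a simplified sketch requiring root, not the nvmf_veth_init helper itself; names and addresses follow this log.

set -e
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target side lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP (port 4420) in from the initiator interface and bridged forwarding,
# then verify that the target address inside the namespace is reachable
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3

Once the pings succeed, the target application is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...), as in the nvmfappstart trace above, so its listeners on 10.0.0.3/10.0.0.4 are reachable from the initiator side.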
00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.317 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.693 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.693 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:06.693 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:06.693 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.693 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67741 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=48053fd952b92408e42f34983c030db73ba0df0c70c0805c 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.da4 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 48053fd952b92408e42f34983c030db73ba0df0c70c0805c 0 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 48053fd952b92408e42f34983c030db73ba0df0c70c0805c 0 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=48053fd952b92408e42f34983c030db73ba0df0c70c0805c 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:06.693 09:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.da4 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.da4 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.da4 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8a597778ffd685096d1839bbccdbfc4796acc6df0481bc5f092852aa36e8796f 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.1yh 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 8a597778ffd685096d1839bbccdbfc4796acc6df0481bc5f092852aa36e8796f 3 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8a597778ffd685096d1839bbccdbfc4796acc6df0481bc5f092852aa36e8796f 3 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8a597778ffd685096d1839bbccdbfc4796acc6df0481bc5f092852aa36e8796f 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.1yh 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.1yh 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.1yh 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:10:06.693 09:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c04b5df65896828e2971eed0a06176b4 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.oN1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c04b5df65896828e2971eed0a06176b4 1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c04b5df65896828e2971eed0a06176b4 1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c04b5df65896828e2971eed0a06176b4 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.oN1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.oN1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.oN1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0f1687e1de77a8c45c26561e3499a11ad45a9eb86ec1149b 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.STJ 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0f1687e1de77a8c45c26561e3499a11ad45a9eb86ec1149b 2 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 0f1687e1de77a8c45c26561e3499a11ad45a9eb86ec1149b 2 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0f1687e1de77a8c45c26561e3499a11ad45a9eb86ec1149b 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.STJ 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.STJ 00:10:06.693 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.STJ 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f9ab942fab885be6c3e211aa9cad4905835441ecc5359a9d 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.0sR 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f9ab942fab885be6c3e211aa9cad4905835441ecc5359a9d 2 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f9ab942fab885be6c3e211aa9cad4905835441ecc5359a9d 2 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f9ab942fab885be6c3e211aa9cad4905835441ecc5359a9d 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.0sR 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.0sR 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.0sR 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:06.694 09:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=bc729b09bd89c5805af3e3f1e73e3b1f 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.JVd 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key bc729b09bd89c5805af3e3f1e73e3b1f 1 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 bc729b09bd89c5805af3e3f1e73e3b1f 1 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=bc729b09bd89c5805af3e3f1e73e3b1f 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:10:06.694 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.JVd 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.JVd 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.JVd 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9bdf58c51eb5e138d02759c596f04ab982d7f7aaa9aa48cb8bab14b54ee98620 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.uS5 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 
9bdf58c51eb5e138d02759c596f04ab982d7f7aaa9aa48cb8bab14b54ee98620 3 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9bdf58c51eb5e138d02759c596f04ab982d7f7aaa9aa48cb8bab14b54ee98620 3 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9bdf58c51eb5e138d02759c596f04ab982d7f7aaa9aa48cb8bab14b54ee98620 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:06.951 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.uS5 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.uS5 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.uS5 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67709 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67709 ']' 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.952 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.210 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.210 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:07.210 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67741 /var/tmp/host.sock 00:10:07.210 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67741 ']' 00:10:07.210 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:07.210 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:07.210 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
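The gen_dhchap_key calls traced above produce a set of DH-HMAC-CHAP secrets of varying digests and lengths (null/48, sha256/32, sha384/48, sha512/64 hex characters, plus controller keys) by reading random bytes with xxd from /dev/urandom and wrapping them in the DHHC-1 text form. A hedged sketch of that formatting step follows; it assumes the base64 payload is the raw key followed by its CRC-32 trailer (little-endian), as in the NVMe DH-HMAC-CHAP secret representation, and the function name is illustrative rather than the script's own helper:

    # Sketch: assemble a DHHC-1 secret string from random key material.
    # digest_id mapping matches the trace: 0=null 1=sha256 2=sha384 3=sha512.
    sketch_dhchap_key() {
        local digest_id=$1 len=$2
        local hexkey
        hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of key material
        python3 - "$digest_id" "$hexkey" <<'EOF'
    import base64, binascii, struct, sys
    digest_id, hexkey = sys.argv[1], sys.argv[2]
    key = bytes.fromhex(hexkey)
    # Assumption: the payload is key bytes plus a little-endian CRC-32 trailer.
    crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)
    print(f"DHHC-1:{int(digest_id):02}:{base64.b64encode(key + crc).decode()}:")
    EOF
    }

    # Example: a 48-hex-character key tagged for sha256 (digest id 1).
    sketch_dhchap_key 1 48 > /tmp/spdk.key-sha256.example
    chmod 0600 /tmp/spdk.key-sha256.example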
00:10:07.210 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.210 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.da4 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.da4 00:10:07.468 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.da4 00:10:07.761 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.1yh ]] 00:10:07.761 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1yh 00:10:07.761 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.761 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.761 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.761 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1yh 00:10:07.761 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1yh 00:10:08.019 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:08.019 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oN1 00:10:08.019 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.019 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.019 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.019 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.oN1 00:10:08.019 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.oN1 00:10:08.277 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.STJ ]] 00:10:08.277 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.STJ 00:10:08.277 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.277 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.277 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.277 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.STJ 00:10:08.277 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.STJ 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0sR 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0sR 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0sR 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.JVd ]] 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JVd 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JVd 00:10:08.844 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JVd 00:10:09.410 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:09.410 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uS5 00:10:09.410 09:16:00 
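Each generated key file is then registered with the SPDK keyring on both RPC servers: rpc_cmd talks to the target on the default /var/tmp/spdk.sock, while hostrpc points rpc.py at /var/tmp/host.sock. A condensed sketch of that registration, using the key names and file paths from the trace (the loop is a simplification of the keys[@]/ckeys[@] iteration above):

    # Register data keys and controller keys with the keyring on target and host.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    keys=(/tmp/spdk.key-null.da4 /tmp/spdk.key-sha256.oN1 /tmp/spdk.key-sha384.0sR /tmp/spdk.key-sha512.uS5)
    ckeys=(/tmp/spdk.key-sha512.1yh /tmp/spdk.key-sha384.STJ /tmp/spdk.key-sha256.JVd "")
    for i in "${!keys[@]}"; do
        "$RPC" keyring_file_add_key "key$i" "${keys[$i]}"                      # target side (default socket)
        "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host side
        if [[ -n "${ckeys[$i]}" ]]; then
            "$RPC" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done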
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.410 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.410 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.410 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.uS5 00:10:09.410 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.uS5 00:10:09.669 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:09.669 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:09.669 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:09.669 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.669 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.669 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.927 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:10.184 00:10:10.184 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.184 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.184 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.442 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.442 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.442 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.442 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.442 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.442 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.442 { 00:10:10.442 "cntlid": 1, 00:10:10.442 "qid": 0, 00:10:10.442 "state": "enabled", 00:10:10.442 "thread": "nvmf_tgt_poll_group_000", 00:10:10.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:10.442 "listen_address": { 00:10:10.442 "trtype": "TCP", 00:10:10.442 "adrfam": "IPv4", 00:10:10.442 "traddr": "10.0.0.3", 00:10:10.442 "trsvcid": "4420" 00:10:10.442 }, 00:10:10.442 "peer_address": { 00:10:10.442 "trtype": "TCP", 00:10:10.442 "adrfam": "IPv4", 00:10:10.442 "traddr": "10.0.0.1", 00:10:10.442 "trsvcid": "47324" 00:10:10.442 }, 00:10:10.442 "auth": { 00:10:10.442 "state": "completed", 00:10:10.442 "digest": "sha256", 00:10:10.442 "dhgroup": "null" 00:10:10.442 } 00:10:10.442 } 00:10:10.442 ]' 00:10:10.442 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.442 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.442 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.442 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:10.442 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.700 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.701 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.701 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.959 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:10.959 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:15.145 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.146 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:15.146 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.146 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.146 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.146 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.146 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:15.146 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.404 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.404 09:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.663 00:10:15.663 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.663 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.663 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.921 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.921 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.921 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.921 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.921 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.921 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.921 { 00:10:15.921 "cntlid": 3, 00:10:15.921 "qid": 0, 00:10:15.921 "state": "enabled", 00:10:15.921 "thread": "nvmf_tgt_poll_group_000", 00:10:15.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:15.921 "listen_address": { 00:10:15.921 "trtype": "TCP", 00:10:15.921 "adrfam": "IPv4", 00:10:15.921 "traddr": "10.0.0.3", 00:10:15.921 "trsvcid": "4420" 00:10:15.921 }, 00:10:15.921 "peer_address": { 00:10:15.921 "trtype": "TCP", 00:10:15.921 "adrfam": "IPv4", 00:10:15.921 "traddr": "10.0.0.1", 00:10:15.921 "trsvcid": "47350" 00:10:15.921 }, 00:10:15.921 "auth": { 00:10:15.921 "state": "completed", 00:10:15.921 "digest": "sha256", 00:10:15.921 "dhgroup": "null" 00:10:15.921 } 00:10:15.921 } 00:10:15.921 ]' 00:10:15.921 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.921 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.921 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.180 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:16.180 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.180 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.180 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.180 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.438 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret 
DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:16.438 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:17.004 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.004 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:17.004 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.004 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.261 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.262 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.262 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.262 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.519 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:17.519 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.519 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.777 00:10:17.777 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.777 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.777 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.035 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.035 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.035 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.035 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.035 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.035 { 00:10:18.035 "cntlid": 5, 00:10:18.035 "qid": 0, 00:10:18.035 "state": "enabled", 00:10:18.035 "thread": "nvmf_tgt_poll_group_000", 00:10:18.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:18.035 "listen_address": { 00:10:18.035 "trtype": "TCP", 00:10:18.035 "adrfam": "IPv4", 00:10:18.035 "traddr": "10.0.0.3", 00:10:18.035 "trsvcid": "4420" 00:10:18.035 }, 00:10:18.035 "peer_address": { 00:10:18.035 "trtype": "TCP", 00:10:18.035 "adrfam": "IPv4", 00:10:18.035 "traddr": "10.0.0.1", 00:10:18.035 "trsvcid": "47382" 00:10:18.035 }, 00:10:18.035 "auth": { 00:10:18.035 "state": "completed", 00:10:18.035 "digest": "sha256", 00:10:18.035 "dhgroup": "null" 00:10:18.035 } 00:10:18.035 } 00:10:18.035 ]' 00:10:18.035 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.293 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.293 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.293 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:18.293 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.293 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.293 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.293 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:18.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:19.120 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.120 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:19.120 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.120 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.120 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.120 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.120 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:19.120 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:19.391 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:19.665 00:10:19.665 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.665 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.665 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.924 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.924 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.924 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.924 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.182 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.182 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.182 { 00:10:20.182 "cntlid": 7, 00:10:20.182 "qid": 0, 00:10:20.182 "state": "enabled", 00:10:20.182 "thread": "nvmf_tgt_poll_group_000", 00:10:20.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:20.183 "listen_address": { 00:10:20.183 "trtype": "TCP", 00:10:20.183 "adrfam": "IPv4", 00:10:20.183 "traddr": "10.0.0.3", 00:10:20.183 "trsvcid": "4420" 00:10:20.183 }, 00:10:20.183 "peer_address": { 00:10:20.183 "trtype": "TCP", 00:10:20.183 "adrfam": "IPv4", 00:10:20.183 "traddr": "10.0.0.1", 00:10:20.183 "trsvcid": "59828" 00:10:20.183 }, 00:10:20.183 "auth": { 00:10:20.183 "state": "completed", 00:10:20.183 "digest": "sha256", 00:10:20.183 "dhgroup": "null" 00:10:20.183 } 00:10:20.183 } 00:10:20.183 ]' 00:10:20.183 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.183 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.183 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.183 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:20.183 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.183 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.183 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.183 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.441 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:20.441 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:21.375 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.375 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:21.375 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.375 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.375 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.375 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:21.375 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.375 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:21.375 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.375 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.376 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.941 00:10:21.941 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.941 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.941 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.199 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.199 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.199 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.199 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.199 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.199 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.199 { 00:10:22.199 "cntlid": 9, 00:10:22.199 "qid": 0, 00:10:22.199 "state": "enabled", 00:10:22.199 "thread": "nvmf_tgt_poll_group_000", 00:10:22.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:22.199 "listen_address": { 00:10:22.199 "trtype": "TCP", 00:10:22.199 "adrfam": "IPv4", 00:10:22.200 "traddr": "10.0.0.3", 00:10:22.200 "trsvcid": "4420" 00:10:22.200 }, 00:10:22.200 "peer_address": { 00:10:22.200 "trtype": "TCP", 00:10:22.200 "adrfam": "IPv4", 00:10:22.200 "traddr": "10.0.0.1", 00:10:22.200 "trsvcid": "59852" 00:10:22.200 }, 00:10:22.200 "auth": { 00:10:22.200 "state": "completed", 00:10:22.200 "digest": "sha256", 00:10:22.200 "dhgroup": "ffdhe2048" 00:10:22.200 } 00:10:22.200 } 00:10:22.200 ]' 00:10:22.200 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.200 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.200 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.200 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:22.200 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.458 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.458 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.458 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.716 
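One full connect_authenticate round, as traced above for sha256 with the ffdhe2048 group, restricts the host's allowed digest/DH-group, registers the key pair for the host NQN on the subsystem, attaches a controller (DH-HMAC-CHAP runs during the CONNECT exchange), checks the qpair's auth state, and detaches. A condensed sketch with the RPC names and flags taken from the trace; the variable names are illustrative:

    # Sketch of one authentication round (sha256 digest, ffdhe2048 DH group, key slot 0).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    NQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c

    # Host (initiator) side: allow only this digest/DH-group combination.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Target side (default socket /var/tmp/spdk.sock): allow the host with a specific key pair.
    "$RPC" nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller from the host side; DH-HMAC-CHAP is negotiated during CONNECT.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n "$NQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the qpair finished authentication, then tear the controller down.
    "$RPC" nvmf_subsystem_get_qpairs "$NQN" | jq -r '.[0].auth.state'   # expect "completed"
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0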
09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:22.716 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:23.283 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.283 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:23.283 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.283 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.283 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.283 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.283 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:23.283 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:23.541 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:23.541 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.542 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.800 00:10:23.800 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:23.800 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.800 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.059 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.059 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.059 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.059 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.059 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.059 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.059 { 00:10:24.059 "cntlid": 11, 00:10:24.059 "qid": 0, 00:10:24.059 "state": "enabled", 00:10:24.059 "thread": "nvmf_tgt_poll_group_000", 00:10:24.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:24.059 "listen_address": { 00:10:24.059 "trtype": "TCP", 00:10:24.059 "adrfam": "IPv4", 00:10:24.059 "traddr": "10.0.0.3", 00:10:24.059 "trsvcid": "4420" 00:10:24.059 }, 00:10:24.059 "peer_address": { 00:10:24.059 "trtype": "TCP", 00:10:24.059 "adrfam": "IPv4", 00:10:24.059 "traddr": "10.0.0.1", 00:10:24.059 "trsvcid": "59870" 00:10:24.059 }, 00:10:24.059 "auth": { 00:10:24.059 "state": "completed", 00:10:24.059 "digest": "sha256", 00:10:24.059 "dhgroup": "ffdhe2048" 00:10:24.059 } 00:10:24.059 } 00:10:24.059 ]' 00:10:24.059 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.317 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.317 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.317 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:24.317 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.317 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.317 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.317 
09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.576 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:24.576 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:25.144 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.144 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:25.144 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.144 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.144 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.144 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.144 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:25.144 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.406 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.980 00:10:25.980 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:25.980 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.980 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.238 { 00:10:26.238 "cntlid": 13, 00:10:26.238 "qid": 0, 00:10:26.238 "state": "enabled", 00:10:26.238 "thread": "nvmf_tgt_poll_group_000", 00:10:26.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:26.238 "listen_address": { 00:10:26.238 "trtype": "TCP", 00:10:26.238 "adrfam": "IPv4", 00:10:26.238 "traddr": "10.0.0.3", 00:10:26.238 "trsvcid": "4420" 00:10:26.238 }, 00:10:26.238 "peer_address": { 00:10:26.238 "trtype": "TCP", 00:10:26.238 "adrfam": "IPv4", 00:10:26.238 "traddr": "10.0.0.1", 00:10:26.238 "trsvcid": "59894" 00:10:26.238 }, 00:10:26.238 "auth": { 00:10:26.238 "state": "completed", 00:10:26.238 "digest": "sha256", 00:10:26.238 "dhgroup": "ffdhe2048" 00:10:26.238 } 00:10:26.238 } 00:10:26.238 ]' 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.238 09:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.238 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.806 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:26.806 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:27.372 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.372 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:27.372 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.372 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.372 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.372 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.372 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:27.372 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:27.629 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:27.629 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.629 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:27.629 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:27.630 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:27.630 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.630 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:10:27.630 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.630 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:27.630 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.630 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:27.630 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:27.630 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:27.887 00:10:27.887 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.887 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.887 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.453 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.453 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.454 { 00:10:28.454 "cntlid": 15, 00:10:28.454 "qid": 0, 00:10:28.454 "state": "enabled", 00:10:28.454 "thread": "nvmf_tgt_poll_group_000", 00:10:28.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:28.454 "listen_address": { 00:10:28.454 "trtype": "TCP", 00:10:28.454 "adrfam": "IPv4", 00:10:28.454 "traddr": "10.0.0.3", 00:10:28.454 "trsvcid": "4420" 00:10:28.454 }, 00:10:28.454 "peer_address": { 00:10:28.454 "trtype": "TCP", 00:10:28.454 "adrfam": "IPv4", 00:10:28.454 "traddr": "10.0.0.1", 00:10:28.454 "trsvcid": "59908" 00:10:28.454 }, 00:10:28.454 "auth": { 00:10:28.454 "state": "completed", 00:10:28.454 "digest": "sha256", 00:10:28.454 "dhgroup": "ffdhe2048" 00:10:28.454 } 00:10:28.454 } 00:10:28.454 ]' 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.454 
09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.454 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.711 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:28.711 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:29.278 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.278 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:29.278 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.278 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.278 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.278 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:29.278 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.278 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:29.278 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.536 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.102 00:10:30.102 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.102 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.102 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.361 { 00:10:30.361 "cntlid": 17, 00:10:30.361 "qid": 0, 00:10:30.361 "state": "enabled", 00:10:30.361 "thread": "nvmf_tgt_poll_group_000", 00:10:30.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:30.361 "listen_address": { 00:10:30.361 "trtype": "TCP", 00:10:30.361 "adrfam": "IPv4", 00:10:30.361 "traddr": "10.0.0.3", 00:10:30.361 "trsvcid": "4420" 00:10:30.361 }, 00:10:30.361 "peer_address": { 00:10:30.361 "trtype": "TCP", 00:10:30.361 "adrfam": "IPv4", 00:10:30.361 "traddr": "10.0.0.1", 00:10:30.361 "trsvcid": "50372" 00:10:30.361 }, 00:10:30.361 "auth": { 00:10:30.361 "state": "completed", 00:10:30.361 "digest": "sha256", 00:10:30.361 "dhgroup": "ffdhe3072" 00:10:30.361 } 00:10:30.361 } 00:10:30.361 ]' 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.361 09:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.361 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.619 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:30.619 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:31.185 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.185 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:31.185 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.185 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.185 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.185 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.185 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:31.185 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.752 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.011 00:10:32.011 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.011 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.011 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.270 { 00:10:32.270 "cntlid": 19, 00:10:32.270 "qid": 0, 00:10:32.270 "state": "enabled", 00:10:32.270 "thread": "nvmf_tgt_poll_group_000", 00:10:32.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:32.270 "listen_address": { 00:10:32.270 "trtype": "TCP", 00:10:32.270 "adrfam": "IPv4", 00:10:32.270 "traddr": "10.0.0.3", 00:10:32.270 "trsvcid": "4420" 00:10:32.270 }, 00:10:32.270 "peer_address": { 00:10:32.270 "trtype": "TCP", 00:10:32.270 "adrfam": "IPv4", 00:10:32.270 "traddr": "10.0.0.1", 00:10:32.270 "trsvcid": "50394" 00:10:32.270 }, 00:10:32.270 "auth": { 00:10:32.270 "state": "completed", 00:10:32.270 "digest": "sha256", 00:10:32.270 "dhgroup": "ffdhe3072" 00:10:32.270 } 00:10:32.270 } 00:10:32.270 ]' 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:32.270 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.528 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.528 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.528 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.787 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:32.787 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:33.355 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.355 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:33.355 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.355 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.355 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.355 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.355 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:33.355 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.614 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.872 00:10:34.130 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.130 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.130 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.388 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.388 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.388 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.388 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.388 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.388 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.388 { 00:10:34.388 "cntlid": 21, 00:10:34.388 "qid": 0, 00:10:34.388 "state": "enabled", 00:10:34.388 "thread": "nvmf_tgt_poll_group_000", 00:10:34.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:34.388 "listen_address": { 00:10:34.388 "trtype": "TCP", 00:10:34.388 "adrfam": "IPv4", 00:10:34.388 "traddr": "10.0.0.3", 00:10:34.388 "trsvcid": "4420" 00:10:34.388 }, 00:10:34.388 "peer_address": { 00:10:34.388 "trtype": "TCP", 00:10:34.388 "adrfam": "IPv4", 00:10:34.388 "traddr": "10.0.0.1", 00:10:34.388 "trsvcid": "50428" 00:10:34.388 }, 00:10:34.388 "auth": { 00:10:34.388 "state": "completed", 00:10:34.388 "digest": "sha256", 00:10:34.388 "dhgroup": "ffdhe3072" 00:10:34.388 } 00:10:34.388 } 00:10:34.388 ]' 00:10:34.388 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.388 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.388 09:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.388 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:34.388 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.647 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.647 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.647 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.905 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:34.905 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:35.473 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.473 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:35.473 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.473 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.473 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.473 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.473 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:35.473 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:35.731 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:35.990 00:10:36.249 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.249 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.249 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.249 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.249 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.249 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.249 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.515 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.516 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.516 { 00:10:36.516 "cntlid": 23, 00:10:36.516 "qid": 0, 00:10:36.516 "state": "enabled", 00:10:36.516 "thread": "nvmf_tgt_poll_group_000", 00:10:36.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:36.516 "listen_address": { 00:10:36.516 "trtype": "TCP", 00:10:36.516 "adrfam": "IPv4", 00:10:36.516 "traddr": "10.0.0.3", 00:10:36.516 "trsvcid": "4420" 00:10:36.516 }, 00:10:36.516 "peer_address": { 00:10:36.516 "trtype": "TCP", 00:10:36.516 "adrfam": "IPv4", 00:10:36.516 "traddr": "10.0.0.1", 00:10:36.516 "trsvcid": "50458" 00:10:36.516 }, 00:10:36.516 "auth": { 00:10:36.516 "state": "completed", 00:10:36.516 "digest": "sha256", 00:10:36.516 "dhgroup": "ffdhe3072" 00:10:36.516 } 00:10:36.516 } 00:10:36.516 ]' 00:10:36.516 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.516 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:36.516 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.516 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:36.516 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.516 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.516 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.516 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.785 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:36.785 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.721 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.287 00:10:38.287 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.287 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.287 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.287 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.287 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.287 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.287 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.545 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.545 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.545 { 00:10:38.545 "cntlid": 25, 00:10:38.545 "qid": 0, 00:10:38.545 "state": "enabled", 00:10:38.545 "thread": "nvmf_tgt_poll_group_000", 00:10:38.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:38.545 "listen_address": { 00:10:38.545 "trtype": "TCP", 00:10:38.545 "adrfam": "IPv4", 00:10:38.545 "traddr": "10.0.0.3", 00:10:38.545 "trsvcid": "4420" 00:10:38.545 }, 00:10:38.545 "peer_address": { 00:10:38.545 "trtype": "TCP", 00:10:38.545 "adrfam": "IPv4", 00:10:38.545 "traddr": "10.0.0.1", 00:10:38.545 "trsvcid": "50480" 00:10:38.545 }, 00:10:38.545 "auth": { 00:10:38.545 "state": "completed", 00:10:38.545 "digest": "sha256", 00:10:38.545 "dhgroup": "ffdhe4096" 00:10:38.545 } 00:10:38.545 } 00:10:38.545 ]' 00:10:38.545 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:38.545 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.545 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.545 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:38.545 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.545 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.545 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.545 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.803 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:38.803 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:39.370 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.629 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:39.629 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.629 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.629 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.629 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.629 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:39.629 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.887 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.888 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.888 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.146 00:10:40.146 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.146 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.146 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.404 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.404 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.404 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.404 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.404 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.404 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.404 { 00:10:40.404 "cntlid": 27, 00:10:40.404 "qid": 0, 00:10:40.404 "state": "enabled", 00:10:40.404 "thread": "nvmf_tgt_poll_group_000", 00:10:40.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:40.404 "listen_address": { 00:10:40.404 "trtype": "TCP", 00:10:40.404 "adrfam": "IPv4", 00:10:40.404 "traddr": "10.0.0.3", 00:10:40.404 "trsvcid": "4420" 00:10:40.404 }, 00:10:40.404 "peer_address": { 00:10:40.404 "trtype": "TCP", 00:10:40.404 "adrfam": "IPv4", 00:10:40.404 "traddr": "10.0.0.1", 00:10:40.404 "trsvcid": "44954" 00:10:40.404 }, 00:10:40.404 "auth": { 00:10:40.404 "state": "completed", 
00:10:40.404 "digest": "sha256", 00:10:40.404 "dhgroup": "ffdhe4096" 00:10:40.404 } 00:10:40.404 } 00:10:40.404 ]' 00:10:40.404 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.662 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.662 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.662 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:40.662 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:40.662 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.662 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.662 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.920 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:40.920 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:41.487 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.487 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:41.487 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.487 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.487 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.487 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:41.487 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:41.487 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.054 09:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.054 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.313 00:10:42.313 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.313 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.313 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.571 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.571 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.571 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.571 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.571 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.571 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:42.571 { 00:10:42.571 "cntlid": 29, 00:10:42.571 "qid": 0, 00:10:42.571 "state": "enabled", 00:10:42.571 "thread": "nvmf_tgt_poll_group_000", 00:10:42.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:42.571 "listen_address": { 00:10:42.571 "trtype": "TCP", 00:10:42.571 "adrfam": "IPv4", 00:10:42.571 "traddr": "10.0.0.3", 00:10:42.571 "trsvcid": "4420" 00:10:42.571 }, 00:10:42.571 "peer_address": { 00:10:42.571 "trtype": "TCP", 00:10:42.571 "adrfam": 
"IPv4", 00:10:42.571 "traddr": "10.0.0.1", 00:10:42.571 "trsvcid": "44978" 00:10:42.571 }, 00:10:42.571 "auth": { 00:10:42.571 "state": "completed", 00:10:42.571 "digest": "sha256", 00:10:42.571 "dhgroup": "ffdhe4096" 00:10:42.571 } 00:10:42.571 } 00:10:42.571 ]' 00:10:42.571 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.571 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.571 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.829 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:42.829 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.829 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.829 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.829 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.087 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:43.087 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:43.654 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.654 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:43.654 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.654 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.654 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.654 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.654 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:43.654 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:43.913 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:43.913 09:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.913 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:43.913 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:43.913 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:43.913 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.913 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:10:43.913 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.913 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.172 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.172 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:44.172 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.172 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.431 00:10:44.431 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.431 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.431 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.690 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.690 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.690 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.690 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.690 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.690 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.690 { 00:10:44.690 "cntlid": 31, 00:10:44.690 "qid": 0, 00:10:44.690 "state": "enabled", 00:10:44.690 "thread": "nvmf_tgt_poll_group_000", 00:10:44.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:44.690 "listen_address": { 00:10:44.690 "trtype": "TCP", 00:10:44.690 "adrfam": "IPv4", 00:10:44.690 "traddr": "10.0.0.3", 00:10:44.690 "trsvcid": "4420" 00:10:44.690 }, 00:10:44.690 "peer_address": { 00:10:44.690 "trtype": "TCP", 
00:10:44.690 "adrfam": "IPv4", 00:10:44.690 "traddr": "10.0.0.1", 00:10:44.690 "trsvcid": "45012" 00:10:44.690 }, 00:10:44.690 "auth": { 00:10:44.690 "state": "completed", 00:10:44.690 "digest": "sha256", 00:10:44.690 "dhgroup": "ffdhe4096" 00:10:44.690 } 00:10:44.690 } 00:10:44.690 ]' 00:10:44.690 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.690 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.690 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.949 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:44.949 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.949 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.949 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.949 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.208 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:45.208 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:46.145 
09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.145 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.712 00:10:46.712 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.712 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.712 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.970 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.970 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.970 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.970 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.970 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.970 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.970 { 00:10:46.970 "cntlid": 33, 00:10:46.970 "qid": 0, 00:10:46.970 "state": "enabled", 00:10:46.970 "thread": "nvmf_tgt_poll_group_000", 00:10:46.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:46.970 "listen_address": { 00:10:46.970 "trtype": "TCP", 00:10:46.970 "adrfam": "IPv4", 00:10:46.970 "traddr": 
"10.0.0.3", 00:10:46.970 "trsvcid": "4420" 00:10:46.970 }, 00:10:46.970 "peer_address": { 00:10:46.970 "trtype": "TCP", 00:10:46.970 "adrfam": "IPv4", 00:10:46.970 "traddr": "10.0.0.1", 00:10:46.970 "trsvcid": "45042" 00:10:46.970 }, 00:10:46.970 "auth": { 00:10:46.970 "state": "completed", 00:10:46.970 "digest": "sha256", 00:10:46.970 "dhgroup": "ffdhe6144" 00:10:46.970 } 00:10:46.970 } 00:10:46.970 ]' 00:10:46.970 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.228 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.228 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.228 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:47.228 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.228 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.228 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.229 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.487 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:47.487 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:48.423 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.423 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:48.423 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.423 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.423 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.423 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.423 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:48.423 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.423 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.991 00:10:48.991 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.991 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.991 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:49.250 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.250 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.250 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.250 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.250 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.250 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.250 { 00:10:49.250 "cntlid": 35, 00:10:49.250 "qid": 0, 00:10:49.250 "state": "enabled", 00:10:49.250 "thread": "nvmf_tgt_poll_group_000", 
00:10:49.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:49.250 "listen_address": { 00:10:49.250 "trtype": "TCP", 00:10:49.250 "adrfam": "IPv4", 00:10:49.250 "traddr": "10.0.0.3", 00:10:49.250 "trsvcid": "4420" 00:10:49.250 }, 00:10:49.250 "peer_address": { 00:10:49.250 "trtype": "TCP", 00:10:49.250 "adrfam": "IPv4", 00:10:49.250 "traddr": "10.0.0.1", 00:10:49.250 "trsvcid": "45082" 00:10:49.250 }, 00:10:49.250 "auth": { 00:10:49.250 "state": "completed", 00:10:49.250 "digest": "sha256", 00:10:49.250 "dhgroup": "ffdhe6144" 00:10:49.250 } 00:10:49.250 } 00:10:49.250 ]' 00:10:49.250 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.250 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.250 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.509 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:49.509 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.509 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.509 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.509 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.767 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:49.767 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:50.335 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.335 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:50.335 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.335 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.335 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.335 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.335 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:50.335 09:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.594 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.162 00:10:51.162 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.162 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.162 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.420 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.420 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.420 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.420 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.421 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.421 { 
00:10:51.421 "cntlid": 37, 00:10:51.421 "qid": 0, 00:10:51.421 "state": "enabled", 00:10:51.421 "thread": "nvmf_tgt_poll_group_000", 00:10:51.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:51.421 "listen_address": { 00:10:51.421 "trtype": "TCP", 00:10:51.421 "adrfam": "IPv4", 00:10:51.421 "traddr": "10.0.0.3", 00:10:51.421 "trsvcid": "4420" 00:10:51.421 }, 00:10:51.421 "peer_address": { 00:10:51.421 "trtype": "TCP", 00:10:51.421 "adrfam": "IPv4", 00:10:51.421 "traddr": "10.0.0.1", 00:10:51.421 "trsvcid": "59098" 00:10:51.421 }, 00:10:51.421 "auth": { 00:10:51.421 "state": "completed", 00:10:51.421 "digest": "sha256", 00:10:51.421 "dhgroup": "ffdhe6144" 00:10:51.421 } 00:10:51.421 } 00:10:51.421 ]' 00:10:51.421 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.421 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.421 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.421 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:51.421 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.679 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.679 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.679 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.938 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:51.938 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:10:52.505 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.505 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:52.505 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.505 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.505 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.505 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.505 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:52.505 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:52.764 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:52.764 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.764 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.764 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:52.764 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:52.764 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.764 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:10:52.764 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.764 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.765 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.765 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:52.765 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:52.765 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:53.336 00:10:53.336 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.336 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.336 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:53.595 { 00:10:53.595 "cntlid": 39, 00:10:53.595 "qid": 0, 00:10:53.595 "state": "enabled", 00:10:53.595 "thread": "nvmf_tgt_poll_group_000", 00:10:53.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:53.595 "listen_address": { 00:10:53.595 "trtype": "TCP", 00:10:53.595 "adrfam": "IPv4", 00:10:53.595 "traddr": "10.0.0.3", 00:10:53.595 "trsvcid": "4420" 00:10:53.595 }, 00:10:53.595 "peer_address": { 00:10:53.595 "trtype": "TCP", 00:10:53.595 "adrfam": "IPv4", 00:10:53.595 "traddr": "10.0.0.1", 00:10:53.595 "trsvcid": "59118" 00:10:53.595 }, 00:10:53.595 "auth": { 00:10:53.595 "state": "completed", 00:10:53.595 "digest": "sha256", 00:10:53.595 "dhgroup": "ffdhe6144" 00:10:53.595 } 00:10:53.595 } 00:10:53.595 ]' 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.595 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.163 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:54.163 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:10:54.730 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.730 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:54.730 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.730 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.730 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.730 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:54.730 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.730 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.730 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.990 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.558 00:10:55.558 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.558 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.558 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.817 { 00:10:55.817 "cntlid": 41, 00:10:55.817 "qid": 0, 00:10:55.817 "state": "enabled", 00:10:55.817 "thread": "nvmf_tgt_poll_group_000", 00:10:55.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:55.817 "listen_address": { 00:10:55.817 "trtype": "TCP", 00:10:55.817 "adrfam": "IPv4", 00:10:55.817 "traddr": "10.0.0.3", 00:10:55.817 "trsvcid": "4420" 00:10:55.817 }, 00:10:55.817 "peer_address": { 00:10:55.817 "trtype": "TCP", 00:10:55.817 "adrfam": "IPv4", 00:10:55.817 "traddr": "10.0.0.1", 00:10:55.817 "trsvcid": "59148" 00:10:55.817 }, 00:10:55.817 "auth": { 00:10:55.817 "state": "completed", 00:10:55.817 "digest": "sha256", 00:10:55.817 "dhgroup": "ffdhe8192" 00:10:55.817 } 00:10:55.817 } 00:10:55.817 ]' 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.817 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.385 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:56.385 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:10:56.951 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.951 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:56.951 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.951 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.951 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:56.951 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.951 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:56.951 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.209 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.146 00:10:58.146 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.146 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:58.146 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.146 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.146 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.146 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.146 09:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.405 { 00:10:58.405 "cntlid": 43, 00:10:58.405 "qid": 0, 00:10:58.405 "state": "enabled", 00:10:58.405 "thread": "nvmf_tgt_poll_group_000", 00:10:58.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:10:58.405 "listen_address": { 00:10:58.405 "trtype": "TCP", 00:10:58.405 "adrfam": "IPv4", 00:10:58.405 "traddr": "10.0.0.3", 00:10:58.405 "trsvcid": "4420" 00:10:58.405 }, 00:10:58.405 "peer_address": { 00:10:58.405 "trtype": "TCP", 00:10:58.405 "adrfam": "IPv4", 00:10:58.405 "traddr": "10.0.0.1", 00:10:58.405 "trsvcid": "59164" 00:10:58.405 }, 00:10:58.405 "auth": { 00:10:58.405 "state": "completed", 00:10:58.405 "digest": "sha256", 00:10:58.405 "dhgroup": "ffdhe8192" 00:10:58.405 } 00:10:58.405 } 00:10:58.405 ]' 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.405 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.664 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:58.664 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:10:59.599 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.599 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:10:59.599 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.599 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
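[editor's annotation, not part of the captured trace] Each round repeats the same three-step setup before that check: restrict the host to one digest/dhgroup pair, register the host NQN on the subsystem with a host key and controller key, then attach a controller so the DH-HMAC-CHAP handshake actually runs. A condensed sketch of the round being started above (key2); the key names refer to keyring entries created earlier in the run and are assumed here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host-side options go through the separate host RPC socket
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # allow the host on the subsystem with its key pair (key2/ckey2 assumed registered earlier)
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # attach a controller from the host side; this is where authentication happens
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2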
00:10:59.599 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.599 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.599 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:59.599 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:59.858 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:59.858 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.859 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.427 00:11:00.427 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.427 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.427 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.686 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.686 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.686 09:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.686 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.686 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.686 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.686 { 00:11:00.686 "cntlid": 45, 00:11:00.686 "qid": 0, 00:11:00.686 "state": "enabled", 00:11:00.686 "thread": "nvmf_tgt_poll_group_000", 00:11:00.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:00.686 "listen_address": { 00:11:00.686 "trtype": "TCP", 00:11:00.686 "adrfam": "IPv4", 00:11:00.686 "traddr": "10.0.0.3", 00:11:00.686 "trsvcid": "4420" 00:11:00.686 }, 00:11:00.686 "peer_address": { 00:11:00.686 "trtype": "TCP", 00:11:00.686 "adrfam": "IPv4", 00:11:00.686 "traddr": "10.0.0.1", 00:11:00.686 "trsvcid": "40182" 00:11:00.686 }, 00:11:00.686 "auth": { 00:11:00.686 "state": "completed", 00:11:00.686 "digest": "sha256", 00:11:00.686 "dhgroup": "ffdhe8192" 00:11:00.686 } 00:11:00.686 } 00:11:00.686 ]' 00:11:00.686 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.686 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.686 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.945 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:00.945 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.945 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.945 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.945 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.204 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:01.204 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:01.772 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.772 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:01.772 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:01.772 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.772 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.772 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.772 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:01.772 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.340 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.907 00:11:02.907 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.907 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.907 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.166 
09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.166 { 00:11:03.166 "cntlid": 47, 00:11:03.166 "qid": 0, 00:11:03.166 "state": "enabled", 00:11:03.166 "thread": "nvmf_tgt_poll_group_000", 00:11:03.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:03.166 "listen_address": { 00:11:03.166 "trtype": "TCP", 00:11:03.166 "adrfam": "IPv4", 00:11:03.166 "traddr": "10.0.0.3", 00:11:03.166 "trsvcid": "4420" 00:11:03.166 }, 00:11:03.166 "peer_address": { 00:11:03.166 "trtype": "TCP", 00:11:03.166 "adrfam": "IPv4", 00:11:03.166 "traddr": "10.0.0.1", 00:11:03.166 "trsvcid": "40210" 00:11:03.166 }, 00:11:03.166 "auth": { 00:11:03.166 "state": "completed", 00:11:03.166 "digest": "sha256", 00:11:03.166 "dhgroup": "ffdhe8192" 00:11:03.166 } 00:11:03.166 } 00:11:03.166 ]' 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:03.166 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.426 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.426 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.426 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.685 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:03.685 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
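[editor's annotation, not part of the captured trace] At this point the run moves from sha256 with ffdhe8192 to the sha384 digest with the null DH group; the trace is simply walking the nested loops in target/auth.sh over every digest, dhgroup and key id. Roughly, with the helper names as they appear in the trace:

  # shape of the loop being traced (variable and helper names taken from target/auth.sh lines 118-123)
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # limit the host to one combination, then run a full connect/verify/teardown round
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done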
00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.253 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.512 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.084 00:11:05.084 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.084 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.084 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.084 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.344 { 00:11:05.344 "cntlid": 49, 00:11:05.344 "qid": 0, 00:11:05.344 "state": "enabled", 00:11:05.344 "thread": "nvmf_tgt_poll_group_000", 00:11:05.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:05.344 "listen_address": { 00:11:05.344 "trtype": "TCP", 00:11:05.344 "adrfam": "IPv4", 00:11:05.344 "traddr": "10.0.0.3", 00:11:05.344 "trsvcid": "4420" 00:11:05.344 }, 00:11:05.344 "peer_address": { 00:11:05.344 "trtype": "TCP", 00:11:05.344 "adrfam": "IPv4", 00:11:05.344 "traddr": "10.0.0.1", 00:11:05.344 "trsvcid": "40242" 00:11:05.344 }, 00:11:05.344 "auth": { 00:11:05.344 "state": "completed", 00:11:05.344 "digest": "sha384", 00:11:05.344 "dhgroup": "null" 00:11:05.344 } 00:11:05.344 } 00:11:05.344 ]' 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.344 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.603 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:05.603 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:06.540 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.540 09:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:06.540 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.540 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.540 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.540 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.540 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:06.540 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.799 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.058 00:11:07.058 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.058 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.058 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.317 { 00:11:07.317 "cntlid": 51, 00:11:07.317 "qid": 0, 00:11:07.317 "state": "enabled", 00:11:07.317 "thread": "nvmf_tgt_poll_group_000", 00:11:07.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:07.317 "listen_address": { 00:11:07.317 "trtype": "TCP", 00:11:07.317 "adrfam": "IPv4", 00:11:07.317 "traddr": "10.0.0.3", 00:11:07.317 "trsvcid": "4420" 00:11:07.317 }, 00:11:07.317 "peer_address": { 00:11:07.317 "trtype": "TCP", 00:11:07.317 "adrfam": "IPv4", 00:11:07.317 "traddr": "10.0.0.1", 00:11:07.317 "trsvcid": "40274" 00:11:07.317 }, 00:11:07.317 "auth": { 00:11:07.317 "state": "completed", 00:11:07.317 "digest": "sha384", 00:11:07.317 "dhgroup": "null" 00:11:07.317 } 00:11:07.317 } 00:11:07.317 ]' 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:07.317 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.576 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.576 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.576 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.835 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:07.835 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:08.401 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.401 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.401 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:08.401 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.401 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.401 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.401 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.401 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:08.401 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:08.659 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:08.659 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.659 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:08.659 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:08.660 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:08.660 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.660 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.660 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.660 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.660 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.660 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.660 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.660 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.226 00:11:09.226 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.226 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.226 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.485 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.485 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.485 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.485 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.485 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.485 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.485 { 00:11:09.485 "cntlid": 53, 00:11:09.485 "qid": 0, 00:11:09.485 "state": "enabled", 00:11:09.485 "thread": "nvmf_tgt_poll_group_000", 00:11:09.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:09.485 "listen_address": { 00:11:09.485 "trtype": "TCP", 00:11:09.485 "adrfam": "IPv4", 00:11:09.485 "traddr": "10.0.0.3", 00:11:09.485 "trsvcid": "4420" 00:11:09.485 }, 00:11:09.485 "peer_address": { 00:11:09.485 "trtype": "TCP", 00:11:09.485 "adrfam": "IPv4", 00:11:09.485 "traddr": "10.0.0.1", 00:11:09.485 "trsvcid": "43932" 00:11:09.485 }, 00:11:09.485 "auth": { 00:11:09.485 "state": "completed", 00:11:09.485 "digest": "sha384", 00:11:09.485 "dhgroup": "null" 00:11:09.485 } 00:11:09.485 } 00:11:09.485 ]' 00:11:09.485 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.485 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.485 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.486 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:09.486 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.486 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.486 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.486 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.053 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:10.054 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:10.623 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.623 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:10.623 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.623 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.623 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.623 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.623 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:10.623 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:10.888 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:10.889 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.147 00:11:11.147 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.147 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.147 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.406 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.406 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.406 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.406 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.664 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.665 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.665 { 00:11:11.665 "cntlid": 55, 00:11:11.665 "qid": 0, 00:11:11.665 "state": "enabled", 00:11:11.665 "thread": "nvmf_tgt_poll_group_000", 00:11:11.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:11.665 "listen_address": { 00:11:11.665 "trtype": "TCP", 00:11:11.665 "adrfam": "IPv4", 00:11:11.665 "traddr": "10.0.0.3", 00:11:11.665 "trsvcid": "4420" 00:11:11.665 }, 00:11:11.665 "peer_address": { 00:11:11.665 "trtype": "TCP", 00:11:11.665 "adrfam": "IPv4", 00:11:11.665 "traddr": "10.0.0.1", 00:11:11.665 "trsvcid": "43974" 00:11:11.665 }, 00:11:11.665 "auth": { 00:11:11.665 "state": "completed", 00:11:11.665 "digest": "sha384", 00:11:11.665 "dhgroup": "null" 00:11:11.665 } 00:11:11.665 } 00:11:11.665 ]' 00:11:11.665 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.665 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.665 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.665 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:11.665 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.665 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.665 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.665 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.923 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:11.923 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:12.490 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:11:12.490 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:12.490 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.490 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.490 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.490 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.490 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.490 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.490 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.748 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.314 00:11:13.314 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
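[editor's annotation, not part of the captured trace] Besides the SPDK host stack, every round also exercises the kernel initiator through nvme-cli: the nvme_connect helper passes the DHHC-1 secrets directly on the command line, and the round ends with a disconnect before the host is removed from the subsystem. Condensed, with the secrets elided as placeholders:

  # flags mirror the nvme connect / nvme disconnect invocations above; secrets are placeholders
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c \
      --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 \
      --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0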
00:11:13.314 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.314 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.572 { 00:11:13.572 "cntlid": 57, 00:11:13.572 "qid": 0, 00:11:13.572 "state": "enabled", 00:11:13.572 "thread": "nvmf_tgt_poll_group_000", 00:11:13.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:13.572 "listen_address": { 00:11:13.572 "trtype": "TCP", 00:11:13.572 "adrfam": "IPv4", 00:11:13.572 "traddr": "10.0.0.3", 00:11:13.572 "trsvcid": "4420" 00:11:13.572 }, 00:11:13.572 "peer_address": { 00:11:13.572 "trtype": "TCP", 00:11:13.572 "adrfam": "IPv4", 00:11:13.572 "traddr": "10.0.0.1", 00:11:13.572 "trsvcid": "44014" 00:11:13.572 }, 00:11:13.572 "auth": { 00:11:13.572 "state": "completed", 00:11:13.572 "digest": "sha384", 00:11:13.572 "dhgroup": "ffdhe2048" 00:11:13.572 } 00:11:13.572 } 00:11:13.572 ]' 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.572 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.573 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.573 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.573 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.831 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:13.831 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: 
--dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:14.765 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.766 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.766 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.766 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.766 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.766 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.766 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.766 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.332 00:11:15.332 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.332 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.332 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.591 { 00:11:15.591 "cntlid": 59, 00:11:15.591 "qid": 0, 00:11:15.591 "state": "enabled", 00:11:15.591 "thread": "nvmf_tgt_poll_group_000", 00:11:15.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:15.591 "listen_address": { 00:11:15.591 "trtype": "TCP", 00:11:15.591 "adrfam": "IPv4", 00:11:15.591 "traddr": "10.0.0.3", 00:11:15.591 "trsvcid": "4420" 00:11:15.591 }, 00:11:15.591 "peer_address": { 00:11:15.591 "trtype": "TCP", 00:11:15.591 "adrfam": "IPv4", 00:11:15.591 "traddr": "10.0.0.1", 00:11:15.591 "trsvcid": "44044" 00:11:15.591 }, 00:11:15.591 "auth": { 00:11:15.591 "state": "completed", 00:11:15.591 "digest": "sha384", 00:11:15.591 "dhgroup": "ffdhe2048" 00:11:15.591 } 00:11:15.591 } 00:11:15.591 ]' 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.591 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.862 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:15.862 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:16.451 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.451 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:16.451 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.451 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.451 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.451 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.451 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:16.451 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:16.709 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:16.709 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.709 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:16.709 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:16.709 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:16.710 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.710 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.710 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.710 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.967 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.967 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.967 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.967 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.225 00:11:17.225 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.225 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.225 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.484 { 00:11:17.484 "cntlid": 61, 00:11:17.484 "qid": 0, 00:11:17.484 "state": "enabled", 00:11:17.484 "thread": "nvmf_tgt_poll_group_000", 00:11:17.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:17.484 "listen_address": { 00:11:17.484 "trtype": "TCP", 00:11:17.484 "adrfam": "IPv4", 00:11:17.484 "traddr": "10.0.0.3", 00:11:17.484 "trsvcid": "4420" 00:11:17.484 }, 00:11:17.484 "peer_address": { 00:11:17.484 "trtype": "TCP", 00:11:17.484 "adrfam": "IPv4", 00:11:17.484 "traddr": "10.0.0.1", 00:11:17.484 "trsvcid": "44074" 00:11:17.484 }, 00:11:17.484 "auth": { 00:11:17.484 "state": "completed", 00:11:17.484 "digest": "sha384", 00:11:17.484 "dhgroup": "ffdhe2048" 00:11:17.484 } 00:11:17.484 } 00:11:17.484 ]' 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:17.484 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.742 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.742 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.742 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.000 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:18.000 09:17:09 
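For readability, the per-key cycle that the trace keeps repeating boils down to the short sequence below. This is only a sketch distilled from the commands in this log: the target listens on 10.0.0.3:4420, the host-side SPDK app exposes its RPC socket at /var/tmp/host.sock, and key2/ckey2 are keyring names registered earlier in the test (that setup is outside this excerpt); the target-side call is what the test's rpc_cmd helper issues against the target's default RPC socket.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c

    # Target side: authorize the host NQN on the subsystem and bind a host
    # key plus a controller (bidirectional) key to it.
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller over TCP, authenticating with the same
    # key pair; the resulting controller is named nvme0.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2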
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:18.567 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.567 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:18.567 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.567 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.567 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.567 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.567 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:18.567 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.824 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
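The kernel-initiator half of each pass is the nvme-cli exchange visible above. Condensed, with the address, NQNs and host ID taken from this trace and the DHHC-1 secrets abbreviated (they are the test's generated values, standing in for whatever secrets were provisioned):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c

    # Connect a single I/O queue, presenting the host secret and, for
    # bidirectional authentication, the expected controller secret.
    nvme connect -t tcp -a 10.0.0.3 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 -q "$hostnqn" \
        --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'

    # Tear the session down and de-authorize the host again.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"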
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:19.389 00:11:19.389 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.389 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.389 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.389 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.389 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.389 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.389 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.647 { 00:11:19.647 "cntlid": 63, 00:11:19.647 "qid": 0, 00:11:19.647 "state": "enabled", 00:11:19.647 "thread": "nvmf_tgt_poll_group_000", 00:11:19.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:19.647 "listen_address": { 00:11:19.647 "trtype": "TCP", 00:11:19.647 "adrfam": "IPv4", 00:11:19.647 "traddr": "10.0.0.3", 00:11:19.647 "trsvcid": "4420" 00:11:19.647 }, 00:11:19.647 "peer_address": { 00:11:19.647 "trtype": "TCP", 00:11:19.647 "adrfam": "IPv4", 00:11:19.647 "traddr": "10.0.0.1", 00:11:19.647 "trsvcid": "50430" 00:11:19.647 }, 00:11:19.647 "auth": { 00:11:19.647 "state": "completed", 00:11:19.647 "digest": "sha384", 00:11:19.647 "dhgroup": "ffdhe2048" 00:11:19.647 } 00:11:19.647 } 00:11:19.647 ]' 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.647 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.905 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:19.905 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
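The key3 pass just completed is the unidirectional variant: the test's ckeys entry for index 3 is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion seen in the trace drops the controller key, and the matching nvme-cli call carries only --dhchap-secret. Sketched with the same NQNs, secret abbreviated:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c

    # Target: only a host key is registered, so the controller is not
    # required to authenticate back to the host.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key3

    # Initiator: correspondingly, no --dhchap-ctrl-secret.
    nvme connect -t tcp -a 10.0.0.3 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 -q "$hostnqn" \
        --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c \
        --dhchap-secret 'DHHC-1:03:...'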
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:20.507 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.507 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:20.507 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.507 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.507 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.507 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.507 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.507 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.508 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:20.766 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.333 00:11:21.333 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.333 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.333 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.333 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.333 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.333 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.333 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.333 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.333 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.333 { 00:11:21.333 "cntlid": 65, 00:11:21.333 "qid": 0, 00:11:21.333 "state": "enabled", 00:11:21.333 "thread": "nvmf_tgt_poll_group_000", 00:11:21.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:21.333 "listen_address": { 00:11:21.333 "trtype": "TCP", 00:11:21.333 "adrfam": "IPv4", 00:11:21.333 "traddr": "10.0.0.3", 00:11:21.333 "trsvcid": "4420" 00:11:21.333 }, 00:11:21.333 "peer_address": { 00:11:21.333 "trtype": "TCP", 00:11:21.333 "adrfam": "IPv4", 00:11:21.333 "traddr": "10.0.0.1", 00:11:21.333 "trsvcid": "50470" 00:11:21.333 }, 00:11:21.333 "auth": { 00:11:21.333 "state": "completed", 00:11:21.333 "digest": "sha384", 00:11:21.333 "dhgroup": "ffdhe3072" 00:11:21.333 } 00:11:21.333 } 00:11:21.333 ]' 00:11:21.333 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.592 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.592 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.592 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.592 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.592 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.592 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.592 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.850 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:21.850 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:22.417 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.417 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:22.417 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.417 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.675 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.675 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.675 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.675 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.675 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:22.675 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.675 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:22.675 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:22.675 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:22.676 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.676 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.676 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.676 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.934 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.934 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.934 09:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.934 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.192 00:11:23.192 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.192 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.192 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.451 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.451 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.451 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.451 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.451 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.451 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.451 { 00:11:23.451 "cntlid": 67, 00:11:23.451 "qid": 0, 00:11:23.451 "state": "enabled", 00:11:23.451 "thread": "nvmf_tgt_poll_group_000", 00:11:23.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:23.451 "listen_address": { 00:11:23.451 "trtype": "TCP", 00:11:23.451 "adrfam": "IPv4", 00:11:23.451 "traddr": "10.0.0.3", 00:11:23.451 "trsvcid": "4420" 00:11:23.451 }, 00:11:23.451 "peer_address": { 00:11:23.451 "trtype": "TCP", 00:11:23.451 "adrfam": "IPv4", 00:11:23.451 "traddr": "10.0.0.1", 00:11:23.451 "trsvcid": "50500" 00:11:23.451 }, 00:11:23.451 "auth": { 00:11:23.451 "state": "completed", 00:11:23.451 "digest": "sha384", 00:11:23.451 "dhgroup": "ffdhe3072" 00:11:23.451 } 00:11:23.451 } 00:11:23.451 ]' 00:11:23.451 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.451 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.451 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.451 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.451 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.451 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.451 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.451 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
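Each pass is judged by asking the target what it negotiated for the new queue pair, which is what the jq probes above are doing. A minimal version of that check, assuming the same subsystem NQN and host-side RPC socket used throughout this log (ffdhe3072 matches the passes in this stretch of the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Host side: the attached controller must show up under the expected name.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair must report the negotiated digest and DH group,
    # and an authentication state of "completed".
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]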
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.018 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:24.018 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:24.585 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.585 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:24.585 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.585 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.585 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.585 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.585 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:24.585 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.902 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.160 00:11:25.160 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.160 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.160 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.419 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.419 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.419 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.419 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.419 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.419 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.419 { 00:11:25.419 "cntlid": 69, 00:11:25.419 "qid": 0, 00:11:25.419 "state": "enabled", 00:11:25.419 "thread": "nvmf_tgt_poll_group_000", 00:11:25.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:25.419 "listen_address": { 00:11:25.419 "trtype": "TCP", 00:11:25.419 "adrfam": "IPv4", 00:11:25.419 "traddr": "10.0.0.3", 00:11:25.419 "trsvcid": "4420" 00:11:25.419 }, 00:11:25.419 "peer_address": { 00:11:25.419 "trtype": "TCP", 00:11:25.419 "adrfam": "IPv4", 00:11:25.419 "traddr": "10.0.0.1", 00:11:25.419 "trsvcid": "50530" 00:11:25.419 }, 00:11:25.419 "auth": { 00:11:25.419 "state": "completed", 00:11:25.419 "digest": "sha384", 00:11:25.419 "dhgroup": "ffdhe3072" 00:11:25.419 } 00:11:25.419 } 00:11:25.419 ]' 00:11:25.419 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.419 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.419 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.419 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:25.419 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.677 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.677 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:25.677 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.936 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:25.936 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:26.503 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.503 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:26.503 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.503 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.503 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.503 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.503 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:26.503 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
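Before every attach the trace re-runs bdev_nvme_set_options against the host-side socket. That call is what pins the initiator to a single digest/DH-group combination for the pass, so a successful login shows that this exact combination was negotiated rather than whatever the target might otherwise prefer. The step in isolation, values as in this part of the log:

    # Offer only SHA-384 and ffdhe3072 from the host for the next attach.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072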
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.761 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.328 00:11:27.328 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.328 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.328 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.586 { 00:11:27.586 "cntlid": 71, 00:11:27.586 "qid": 0, 00:11:27.586 "state": "enabled", 00:11:27.586 "thread": "nvmf_tgt_poll_group_000", 00:11:27.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:27.586 "listen_address": { 00:11:27.586 "trtype": "TCP", 00:11:27.586 "adrfam": "IPv4", 00:11:27.586 "traddr": "10.0.0.3", 00:11:27.586 "trsvcid": "4420" 00:11:27.586 }, 00:11:27.586 "peer_address": { 00:11:27.586 "trtype": "TCP", 00:11:27.586 "adrfam": "IPv4", 00:11:27.586 "traddr": "10.0.0.1", 00:11:27.586 "trsvcid": "50564" 00:11:27.586 }, 00:11:27.586 "auth": { 00:11:27.586 "state": "completed", 00:11:27.586 "digest": "sha384", 00:11:27.586 "dhgroup": "ffdhe3072" 00:11:27.586 } 00:11:27.586 } 00:11:27.586 ]' 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.586 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.845 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:27.845 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:28.780 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.780 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:28.780 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.780 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.780 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.780 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.780 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.780 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.780 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.051 09:17:20 
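Everything in this section is one sweep of a nested loop; the for dhgroup in "${dhgroups[@]}" and for keyid in "${!keys[@]}" markers in the trace are its outline. A simplified reconstruction of that driver follows, with connect_authenticate standing for the add_host / attach / verify / nvme-cli / cleanup cycle illustrated earlier (the real auth.sh carries more setup and bookkeeping than this):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in 0 1 2 3; do
            # Restrict the host to this digest/group before each attempt.
            "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            # One authenticated attach/verify/detach pass with key$keyid.
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done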
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.051 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.323 00:11:29.323 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.323 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.323 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.581 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.581 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.581 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.581 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.581 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.581 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.581 { 00:11:29.581 "cntlid": 73, 00:11:29.581 "qid": 0, 00:11:29.581 "state": "enabled", 00:11:29.581 "thread": "nvmf_tgt_poll_group_000", 00:11:29.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:29.581 "listen_address": { 00:11:29.581 "trtype": "TCP", 00:11:29.581 "adrfam": "IPv4", 00:11:29.581 "traddr": "10.0.0.3", 00:11:29.581 "trsvcid": "4420" 00:11:29.581 }, 00:11:29.581 "peer_address": { 00:11:29.581 "trtype": "TCP", 00:11:29.581 "adrfam": "IPv4", 00:11:29.581 "traddr": "10.0.0.1", 00:11:29.581 "trsvcid": "41662" 00:11:29.581 }, 00:11:29.581 "auth": { 00:11:29.581 "state": "completed", 00:11:29.581 "digest": "sha384", 00:11:29.581 "dhgroup": "ffdhe4096" 00:11:29.581 } 00:11:29.581 } 00:11:29.581 ]' 00:11:29.581 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.581 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.581 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.840 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:29.840 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.840 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.840 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.840 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.099 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:30.099 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:30.665 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.665 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:30.665 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.665 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.665 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.665 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.665 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.665 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.923 09:17:22 
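One small reading aid for the nvme connect lines: the host NQN used throughout this run is simply the UUID-based NQN form built from the same identifier that is passed as --hostid, so the two arguments always travel together. A hypothetical illustration (the variable names are mine, not the script's; secrets abbreviated as before):

    hostid=a5ef64a0-86d4-4d8b-af10-05a9f556092c
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"

    nvme connect -t tcp -a 10.0.0.3 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'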
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.923 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.491 00:11:31.491 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.491 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.491 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.749 { 00:11:31.749 "cntlid": 75, 00:11:31.749 "qid": 0, 00:11:31.749 "state": "enabled", 00:11:31.749 "thread": "nvmf_tgt_poll_group_000", 00:11:31.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:31.749 "listen_address": { 00:11:31.749 "trtype": "TCP", 00:11:31.749 "adrfam": "IPv4", 00:11:31.749 "traddr": "10.0.0.3", 00:11:31.749 "trsvcid": "4420" 00:11:31.749 }, 00:11:31.749 "peer_address": { 00:11:31.749 "trtype": "TCP", 00:11:31.749 "adrfam": "IPv4", 00:11:31.749 "traddr": "10.0.0.1", 00:11:31.749 "trsvcid": "41702" 00:11:31.749 }, 00:11:31.749 "auth": { 00:11:31.749 "state": "completed", 00:11:31.749 "digest": "sha384", 00:11:31.749 "dhgroup": "ffdhe4096" 00:11:31.749 } 00:11:31.749 } 00:11:31.749 ]' 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:31.749 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.008 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.008 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.008 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.266 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:32.266 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:32.846 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.846 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:32.846 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.846 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.846 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.846 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.846 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:32.846 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:33.118 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:33.118 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.118 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.119 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.685 00:11:33.685 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.685 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.685 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.943 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.943 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.943 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.943 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.943 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.943 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.943 { 00:11:33.943 "cntlid": 77, 00:11:33.943 "qid": 0, 00:11:33.943 "state": "enabled", 00:11:33.943 "thread": "nvmf_tgt_poll_group_000", 00:11:33.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:33.943 "listen_address": { 00:11:33.943 "trtype": "TCP", 00:11:33.943 "adrfam": "IPv4", 00:11:33.943 "traddr": "10.0.0.3", 00:11:33.943 "trsvcid": "4420" 00:11:33.943 }, 00:11:33.943 "peer_address": { 00:11:33.943 "trtype": "TCP", 00:11:33.943 "adrfam": "IPv4", 00:11:33.943 "traddr": "10.0.0.1", 00:11:33.944 "trsvcid": "41748" 00:11:33.944 }, 00:11:33.944 "auth": { 00:11:33.944 "state": "completed", 00:11:33.944 "digest": "sha384", 00:11:33.944 "dhgroup": "ffdhe4096" 00:11:33.944 } 00:11:33.944 } 00:11:33.944 ]' 00:11:33.944 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.944 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.944 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:33.944 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:33.944 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.202 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.202 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.202 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.461 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:34.461 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:35.030 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.030 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:35.030 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.030 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.030 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.030 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.030 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:35.030 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.289 09:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.289 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.856 00:11:35.856 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.856 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.856 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.115 { 00:11:36.115 "cntlid": 79, 00:11:36.115 "qid": 0, 00:11:36.115 "state": "enabled", 00:11:36.115 "thread": "nvmf_tgt_poll_group_000", 00:11:36.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:36.115 "listen_address": { 00:11:36.115 "trtype": "TCP", 00:11:36.115 "adrfam": "IPv4", 00:11:36.115 "traddr": "10.0.0.3", 00:11:36.115 "trsvcid": "4420" 00:11:36.115 }, 00:11:36.115 "peer_address": { 00:11:36.115 "trtype": "TCP", 00:11:36.115 "adrfam": "IPv4", 00:11:36.115 "traddr": "10.0.0.1", 00:11:36.115 "trsvcid": "41778" 00:11:36.115 }, 00:11:36.115 "auth": { 00:11:36.115 "state": "completed", 00:11:36.115 "digest": "sha384", 00:11:36.115 "dhgroup": "ffdhe4096" 00:11:36.115 } 00:11:36.115 } 00:11:36.115 ]' 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.115 09:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:36.115 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.373 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.373 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.373 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.632 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:36.632 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:37.200 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.200 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:37.200 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.200 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.200 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.200 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.200 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.200 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.200 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.465 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.033 00:11:38.033 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.033 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.033 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.292 { 00:11:38.292 "cntlid": 81, 00:11:38.292 "qid": 0, 00:11:38.292 "state": "enabled", 00:11:38.292 "thread": "nvmf_tgt_poll_group_000", 00:11:38.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:38.292 "listen_address": { 00:11:38.292 "trtype": "TCP", 00:11:38.292 "adrfam": "IPv4", 00:11:38.292 "traddr": "10.0.0.3", 00:11:38.292 "trsvcid": "4420" 00:11:38.292 }, 00:11:38.292 "peer_address": { 00:11:38.292 "trtype": "TCP", 00:11:38.292 "adrfam": "IPv4", 00:11:38.292 "traddr": "10.0.0.1", 00:11:38.292 "trsvcid": "41816" 00:11:38.292 }, 00:11:38.292 "auth": { 00:11:38.292 "state": "completed", 00:11:38.292 "digest": "sha384", 00:11:38.292 "dhgroup": "ffdhe6144" 00:11:38.292 } 00:11:38.292 } 00:11:38.292 ]' 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
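[editor's note] Each pass of this log exercises the same DH-HMAC-CHAP flow for one digest/dhgroup/key combination. The following is a minimal shell sketch of a single pass, assembled only from commands that appear verbatim in this log; the variable names, the example combination (sha384 / ffdhe4096 / key1), and the assumption that the target-side rpc.py call uses the application's default RPC socket are illustrative additions, not part of the test script itself.

# Hypothetical single-pass sketch of the flow traced above.
# key1/ckey1 are key names assumed to have been registered earlier in the test (not shown here).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c

# Initiator: restrict the allowed digest and DH group before connecting.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
# Target: authorize the host NQN on the subsystem with the key pair under test
# (assumes the default RPC socket reaches the nvmf target application, as rpc_cmd does in the log).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Initiator: attach a controller; this is the step where DH-HMAC-CHAP actually runs.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Target: confirm the queue pair completed authentication with the expected parameters.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect "ffdhe4096"
# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the recorded run the pass also repeats the connection with the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:... / nvme disconnect) before removing the host; that step is omitted from the sketch for brevity.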
00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.292 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.550 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:38.550 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:39.118 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.119 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:39.119 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.119 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.119 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.119 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.119 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:39.119 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.687 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.946 00:11:39.946 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.946 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.946 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.205 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.205 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.205 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.205 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.205 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.205 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.205 { 00:11:40.205 "cntlid": 83, 00:11:40.205 "qid": 0, 00:11:40.205 "state": "enabled", 00:11:40.205 "thread": "nvmf_tgt_poll_group_000", 00:11:40.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:40.205 "listen_address": { 00:11:40.205 "trtype": "TCP", 00:11:40.205 "adrfam": "IPv4", 00:11:40.205 "traddr": "10.0.0.3", 00:11:40.205 "trsvcid": "4420" 00:11:40.205 }, 00:11:40.205 "peer_address": { 00:11:40.205 "trtype": "TCP", 00:11:40.205 "adrfam": "IPv4", 00:11:40.205 "traddr": "10.0.0.1", 00:11:40.205 "trsvcid": "37140" 00:11:40.205 }, 00:11:40.205 "auth": { 00:11:40.205 "state": "completed", 00:11:40.205 "digest": "sha384", 
00:11:40.205 "dhgroup": "ffdhe6144" 00:11:40.205 } 00:11:40.205 } 00:11:40.205 ]' 00:11:40.205 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.463 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.464 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.464 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:40.464 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.464 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.464 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.464 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.722 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:40.722 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:41.657 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.657 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:41.657 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.657 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.657 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.657 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.657 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:41.657 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.916 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.483 00:11:42.483 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.483 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.483 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.741 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.741 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.741 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.741 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.741 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.741 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.741 { 00:11:42.741 "cntlid": 85, 00:11:42.741 "qid": 0, 00:11:42.741 "state": "enabled", 00:11:42.741 "thread": "nvmf_tgt_poll_group_000", 00:11:42.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:42.741 "listen_address": { 00:11:42.741 "trtype": "TCP", 00:11:42.741 "adrfam": "IPv4", 00:11:42.741 "traddr": "10.0.0.3", 00:11:42.741 "trsvcid": "4420" 00:11:42.741 }, 00:11:42.741 "peer_address": { 00:11:42.741 "trtype": "TCP", 00:11:42.741 "adrfam": "IPv4", 00:11:42.741 "traddr": "10.0.0.1", 00:11:42.741 "trsvcid": "37158" 
00:11:42.741 }, 00:11:42.741 "auth": { 00:11:42.741 "state": "completed", 00:11:42.741 "digest": "sha384", 00:11:42.741 "dhgroup": "ffdhe6144" 00:11:42.741 } 00:11:42.741 } 00:11:42.742 ]' 00:11:42.742 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.742 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.742 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.742 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:42.742 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.000 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.000 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.000 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.259 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:43.259 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:43.826 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.826 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:43.826 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.826 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.826 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.826 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.826 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:43.826 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.085 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.653 00:11:44.653 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.653 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.653 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.913 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.913 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.913 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.913 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.913 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.913 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.913 { 00:11:44.913 "cntlid": 87, 00:11:44.913 "qid": 0, 00:11:44.913 "state": "enabled", 00:11:44.913 "thread": "nvmf_tgt_poll_group_000", 00:11:44.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:44.913 "listen_address": { 00:11:44.913 "trtype": "TCP", 00:11:44.913 "adrfam": "IPv4", 00:11:44.913 "traddr": "10.0.0.3", 00:11:44.913 "trsvcid": "4420" 00:11:44.913 }, 00:11:44.913 "peer_address": { 00:11:44.913 "trtype": "TCP", 00:11:44.913 "adrfam": "IPv4", 00:11:44.913 "traddr": "10.0.0.1", 00:11:44.913 "trsvcid": 
"37188" 00:11:44.913 }, 00:11:44.913 "auth": { 00:11:44.913 "state": "completed", 00:11:44.913 "digest": "sha384", 00:11:44.913 "dhgroup": "ffdhe6144" 00:11:44.913 } 00:11:44.913 } 00:11:44.913 ]' 00:11:44.913 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.171 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.171 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.171 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:45.171 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.171 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.171 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.171 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.430 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:45.430 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:46.028 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.287 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:46.287 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.287 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.287 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.287 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.287 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.287 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.547 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.114 00:11:47.114 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.114 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.114 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.372 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.372 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.372 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.372 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.373 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.373 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.373 { 00:11:47.373 "cntlid": 89, 00:11:47.373 "qid": 0, 00:11:47.373 "state": "enabled", 00:11:47.373 "thread": "nvmf_tgt_poll_group_000", 00:11:47.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:47.373 "listen_address": { 00:11:47.373 "trtype": "TCP", 00:11:47.373 "adrfam": "IPv4", 00:11:47.373 "traddr": "10.0.0.3", 00:11:47.373 "trsvcid": "4420" 00:11:47.373 }, 00:11:47.373 "peer_address": { 00:11:47.373 
"trtype": "TCP", 00:11:47.373 "adrfam": "IPv4", 00:11:47.373 "traddr": "10.0.0.1", 00:11:47.373 "trsvcid": "37208" 00:11:47.373 }, 00:11:47.373 "auth": { 00:11:47.373 "state": "completed", 00:11:47.373 "digest": "sha384", 00:11:47.373 "dhgroup": "ffdhe8192" 00:11:47.373 } 00:11:47.373 } 00:11:47.373 ]' 00:11:47.373 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.631 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.631 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.631 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.631 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.631 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.631 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.631 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.890 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:47.890 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:48.855 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.855 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:48.855 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.855 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.855 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.855 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.855 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:48.855 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:49.114 09:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.114 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.681 00:11:49.681 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.681 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.681 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.940 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.940 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.940 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.940 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.199 { 00:11:50.199 "cntlid": 91, 00:11:50.199 "qid": 0, 00:11:50.199 "state": "enabled", 00:11:50.199 "thread": "nvmf_tgt_poll_group_000", 00:11:50.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 
00:11:50.199 "listen_address": { 00:11:50.199 "trtype": "TCP", 00:11:50.199 "adrfam": "IPv4", 00:11:50.199 "traddr": "10.0.0.3", 00:11:50.199 "trsvcid": "4420" 00:11:50.199 }, 00:11:50.199 "peer_address": { 00:11:50.199 "trtype": "TCP", 00:11:50.199 "adrfam": "IPv4", 00:11:50.199 "traddr": "10.0.0.1", 00:11:50.199 "trsvcid": "55740" 00:11:50.199 }, 00:11:50.199 "auth": { 00:11:50.199 "state": "completed", 00:11:50.199 "digest": "sha384", 00:11:50.199 "dhgroup": "ffdhe8192" 00:11:50.199 } 00:11:50.199 } 00:11:50.199 ]' 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.199 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.458 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:50.458 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:51.024 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.024 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:51.024 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.024 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.024 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.024 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.024 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:51.024 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.282 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.849 00:11:51.849 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.849 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.849 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.107 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.107 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.107 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.107 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.108 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.108 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.108 { 00:11:52.108 "cntlid": 93, 00:11:52.108 "qid": 0, 00:11:52.108 "state": "enabled", 00:11:52.108 "thread": 
"nvmf_tgt_poll_group_000", 00:11:52.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:52.108 "listen_address": { 00:11:52.108 "trtype": "TCP", 00:11:52.108 "adrfam": "IPv4", 00:11:52.108 "traddr": "10.0.0.3", 00:11:52.108 "trsvcid": "4420" 00:11:52.108 }, 00:11:52.108 "peer_address": { 00:11:52.108 "trtype": "TCP", 00:11:52.108 "adrfam": "IPv4", 00:11:52.108 "traddr": "10.0.0.1", 00:11:52.108 "trsvcid": "55770" 00:11:52.108 }, 00:11:52.108 "auth": { 00:11:52.108 "state": "completed", 00:11:52.108 "digest": "sha384", 00:11:52.108 "dhgroup": "ffdhe8192" 00:11:52.108 } 00:11:52.108 } 00:11:52.108 ]' 00:11:52.108 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.366 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:52.366 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.366 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:52.366 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.366 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.366 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.366 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.624 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:52.624 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:11:53.191 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.191 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:53.191 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.191 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.191 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.191 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.191 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:53.191 09:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:53.449 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.016 00:11:54.016 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.016 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.016 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.274 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.274 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.274 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.274 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.274 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.274 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.274 { 00:11:54.274 "cntlid": 95, 00:11:54.274 "qid": 0, 00:11:54.274 "state": "enabled", 00:11:54.274 
"thread": "nvmf_tgt_poll_group_000", 00:11:54.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:54.274 "listen_address": { 00:11:54.274 "trtype": "TCP", 00:11:54.274 "adrfam": "IPv4", 00:11:54.274 "traddr": "10.0.0.3", 00:11:54.274 "trsvcid": "4420" 00:11:54.274 }, 00:11:54.274 "peer_address": { 00:11:54.274 "trtype": "TCP", 00:11:54.274 "adrfam": "IPv4", 00:11:54.274 "traddr": "10.0.0.1", 00:11:54.274 "trsvcid": "55796" 00:11:54.274 }, 00:11:54.274 "auth": { 00:11:54.274 "state": "completed", 00:11:54.274 "digest": "sha384", 00:11:54.274 "dhgroup": "ffdhe8192" 00:11:54.274 } 00:11:54.274 } 00:11:54.274 ]' 00:11:54.274 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.274 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:54.274 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.533 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:54.533 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.533 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.533 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.533 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.791 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:54.791 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:11:55.357 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.357 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:55.357 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.357 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.357 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.357 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:55.357 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:55.357 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.357 09:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:55.357 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.616 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.182 00:11:56.182 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.182 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.182 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.441 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.441 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.441 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.441 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.441 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.441 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.441 { 00:11:56.441 "cntlid": 97, 00:11:56.441 "qid": 0, 00:11:56.441 "state": "enabled", 00:11:56.441 "thread": "nvmf_tgt_poll_group_000", 00:11:56.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:56.441 "listen_address": { 00:11:56.441 "trtype": "TCP", 00:11:56.441 "adrfam": "IPv4", 00:11:56.441 "traddr": "10.0.0.3", 00:11:56.441 "trsvcid": "4420" 00:11:56.441 }, 00:11:56.441 "peer_address": { 00:11:56.441 "trtype": "TCP", 00:11:56.441 "adrfam": "IPv4", 00:11:56.441 "traddr": "10.0.0.1", 00:11:56.441 "trsvcid": "55812" 00:11:56.441 }, 00:11:56.441 "auth": { 00:11:56.441 "state": "completed", 00:11:56.441 "digest": "sha512", 00:11:56.441 "dhgroup": "null" 00:11:56.441 } 00:11:56.441 } 00:11:56.441 ]' 00:11:56.441 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.441 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.441 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.441 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:56.441 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.441 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.441 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.441 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.008 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:57.008 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:11:57.577 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.577 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:57.577 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.577 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.577 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:57.577 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.577 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:57.577 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.835 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.094 00:11:58.094 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.094 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.094 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.352 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.352 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.352 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.611 09:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.611 { 00:11:58.611 "cntlid": 99, 00:11:58.611 "qid": 0, 00:11:58.611 "state": "enabled", 00:11:58.611 "thread": "nvmf_tgt_poll_group_000", 00:11:58.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:11:58.611 "listen_address": { 00:11:58.611 "trtype": "TCP", 00:11:58.611 "adrfam": "IPv4", 00:11:58.611 "traddr": "10.0.0.3", 00:11:58.611 "trsvcid": "4420" 00:11:58.611 }, 00:11:58.611 "peer_address": { 00:11:58.611 "trtype": "TCP", 00:11:58.611 "adrfam": "IPv4", 00:11:58.611 "traddr": "10.0.0.1", 00:11:58.611 "trsvcid": "55840" 00:11:58.611 }, 00:11:58.611 "auth": { 00:11:58.611 "state": "completed", 00:11:58.611 "digest": "sha512", 00:11:58.611 "dhgroup": "null" 00:11:58.611 } 00:11:58.611 } 00:11:58.611 ]' 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.611 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.869 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:58.869 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:11:59.436 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.436 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:11:59.436 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.436 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.694 09:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.694 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.694 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:59.694 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.953 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.211 00:12:00.211 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.211 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.211 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.470 { 00:12:00.470 "cntlid": 101, 00:12:00.470 "qid": 0, 00:12:00.470 "state": "enabled", 00:12:00.470 "thread": "nvmf_tgt_poll_group_000", 00:12:00.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:00.470 "listen_address": { 00:12:00.470 "trtype": "TCP", 00:12:00.470 "adrfam": "IPv4", 00:12:00.470 "traddr": "10.0.0.3", 00:12:00.470 "trsvcid": "4420" 00:12:00.470 }, 00:12:00.470 "peer_address": { 00:12:00.470 "trtype": "TCP", 00:12:00.470 "adrfam": "IPv4", 00:12:00.470 "traddr": "10.0.0.1", 00:12:00.470 "trsvcid": "41254" 00:12:00.470 }, 00:12:00.470 "auth": { 00:12:00.470 "state": "completed", 00:12:00.470 "digest": "sha512", 00:12:00.470 "dhgroup": "null" 00:12:00.470 } 00:12:00.470 } 00:12:00.470 ]' 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:00.470 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.729 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.729 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.729 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.987 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:00.987 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:01.554 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.554 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:01.554 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.554 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:01.554 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.554 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.554 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:01.554 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:01.812 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.071 00:12:02.071 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.071 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.071 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.649 { 00:12:02.649 "cntlid": 103, 00:12:02.649 "qid": 0, 00:12:02.649 "state": "enabled", 00:12:02.649 "thread": "nvmf_tgt_poll_group_000", 00:12:02.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:02.649 "listen_address": { 00:12:02.649 "trtype": "TCP", 00:12:02.649 "adrfam": "IPv4", 00:12:02.649 "traddr": "10.0.0.3", 00:12:02.649 "trsvcid": "4420" 00:12:02.649 }, 00:12:02.649 "peer_address": { 00:12:02.649 "trtype": "TCP", 00:12:02.649 "adrfam": "IPv4", 00:12:02.649 "traddr": "10.0.0.1", 00:12:02.649 "trsvcid": "41278" 00:12:02.649 }, 00:12:02.649 "auth": { 00:12:02.649 "state": "completed", 00:12:02.649 "digest": "sha512", 00:12:02.649 "dhgroup": "null" 00:12:02.649 } 00:12:02.649 } 00:12:02.649 ]' 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.649 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.928 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:02.928 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.862 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.429 00:12:04.429 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.429 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.429 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.688 
09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.688 { 00:12:04.688 "cntlid": 105, 00:12:04.688 "qid": 0, 00:12:04.688 "state": "enabled", 00:12:04.688 "thread": "nvmf_tgt_poll_group_000", 00:12:04.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:04.688 "listen_address": { 00:12:04.688 "trtype": "TCP", 00:12:04.688 "adrfam": "IPv4", 00:12:04.688 "traddr": "10.0.0.3", 00:12:04.688 "trsvcid": "4420" 00:12:04.688 }, 00:12:04.688 "peer_address": { 00:12:04.688 "trtype": "TCP", 00:12:04.688 "adrfam": "IPv4", 00:12:04.688 "traddr": "10.0.0.1", 00:12:04.688 "trsvcid": "41318" 00:12:04.688 }, 00:12:04.688 "auth": { 00:12:04.688 "state": "completed", 00:12:04.688 "digest": "sha512", 00:12:04.688 "dhgroup": "ffdhe2048" 00:12:04.688 } 00:12:04.688 } 00:12:04.688 ]' 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.688 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.947 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:04.947 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:05.514 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:05.773 09:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.773 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.340 00:12:06.340 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.340 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.340 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.598 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:06.598 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.599 { 00:12:06.599 "cntlid": 107, 00:12:06.599 "qid": 0, 00:12:06.599 "state": "enabled", 00:12:06.599 "thread": "nvmf_tgt_poll_group_000", 00:12:06.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:06.599 "listen_address": { 00:12:06.599 "trtype": "TCP", 00:12:06.599 "adrfam": "IPv4", 00:12:06.599 "traddr": "10.0.0.3", 00:12:06.599 "trsvcid": "4420" 00:12:06.599 }, 00:12:06.599 "peer_address": { 00:12:06.599 "trtype": "TCP", 00:12:06.599 "adrfam": "IPv4", 00:12:06.599 "traddr": "10.0.0.1", 00:12:06.599 "trsvcid": "41342" 00:12:06.599 }, 00:12:06.599 "auth": { 00:12:06.599 "state": "completed", 00:12:06.599 "digest": "sha512", 00:12:06.599 "dhgroup": "ffdhe2048" 00:12:06.599 } 00:12:06.599 } 00:12:06.599 ]' 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.599 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.856 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:06.856 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:07.794 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.794 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:07.794 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.794 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.794 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.794 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.794 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:07.794 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.102 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.360 00:12:08.360 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.360 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.360 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.619 { 00:12:08.619 "cntlid": 109, 00:12:08.619 "qid": 0, 00:12:08.619 "state": "enabled", 00:12:08.619 "thread": "nvmf_tgt_poll_group_000", 00:12:08.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:08.619 "listen_address": { 00:12:08.619 "trtype": "TCP", 00:12:08.619 "adrfam": "IPv4", 00:12:08.619 "traddr": "10.0.0.3", 00:12:08.619 "trsvcid": "4420" 00:12:08.619 }, 00:12:08.619 "peer_address": { 00:12:08.619 "trtype": "TCP", 00:12:08.619 "adrfam": "IPv4", 00:12:08.619 "traddr": "10.0.0.1", 00:12:08.619 "trsvcid": "41368" 00:12:08.619 }, 00:12:08.619 "auth": { 00:12:08.619 "state": "completed", 00:12:08.619 "digest": "sha512", 00:12:08.619 "dhgroup": "ffdhe2048" 00:12:08.619 } 00:12:08.619 } 00:12:08.619 ]' 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.619 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.186 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:09.186 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:09.752 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:12:09.753 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:09.753 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.753 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.753 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.753 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.753 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:09.753 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.012 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.270 00:12:10.528 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.528 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.528 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.786 { 00:12:10.786 "cntlid": 111, 00:12:10.786 "qid": 0, 00:12:10.786 "state": "enabled", 00:12:10.786 "thread": "nvmf_tgt_poll_group_000", 00:12:10.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:10.786 "listen_address": { 00:12:10.786 "trtype": "TCP", 00:12:10.786 "adrfam": "IPv4", 00:12:10.786 "traddr": "10.0.0.3", 00:12:10.786 "trsvcid": "4420" 00:12:10.786 }, 00:12:10.786 "peer_address": { 00:12:10.786 "trtype": "TCP", 00:12:10.786 "adrfam": "IPv4", 00:12:10.786 "traddr": "10.0.0.1", 00:12:10.786 "trsvcid": "36568" 00:12:10.786 }, 00:12:10.786 "auth": { 00:12:10.786 "state": "completed", 00:12:10.786 "digest": "sha512", 00:12:10.786 "dhgroup": "ffdhe2048" 00:12:10.786 } 00:12:10.786 } 00:12:10.786 ]' 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.786 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.045 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:11.045 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:11.613 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.613 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:11.613 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.613 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.613 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.613 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.613 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.613 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.613 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.179 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.438 00:12:12.438 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.438 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.438 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.697 { 00:12:12.697 "cntlid": 113, 00:12:12.697 "qid": 0, 00:12:12.697 "state": "enabled", 00:12:12.697 "thread": "nvmf_tgt_poll_group_000", 00:12:12.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:12.697 "listen_address": { 00:12:12.697 "trtype": "TCP", 00:12:12.697 "adrfam": "IPv4", 00:12:12.697 "traddr": "10.0.0.3", 00:12:12.697 "trsvcid": "4420" 00:12:12.697 }, 00:12:12.697 "peer_address": { 00:12:12.697 "trtype": "TCP", 00:12:12.697 "adrfam": "IPv4", 00:12:12.697 "traddr": "10.0.0.1", 00:12:12.697 "trsvcid": "36598" 00:12:12.697 }, 00:12:12.697 "auth": { 00:12:12.697 "state": "completed", 00:12:12.697 "digest": "sha512", 00:12:12.697 "dhgroup": "ffdhe3072" 00:12:12.697 } 00:12:12.697 } 00:12:12.697 ]' 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.697 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.264 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:13.264 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret 
DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:13.831 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.831 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:13.831 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.831 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.831 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.831 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.831 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:13.831 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.090 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.657 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.657 { 00:12:14.657 "cntlid": 115, 00:12:14.657 "qid": 0, 00:12:14.657 "state": "enabled", 00:12:14.657 "thread": "nvmf_tgt_poll_group_000", 00:12:14.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:14.657 "listen_address": { 00:12:14.657 "trtype": "TCP", 00:12:14.657 "adrfam": "IPv4", 00:12:14.657 "traddr": "10.0.0.3", 00:12:14.657 "trsvcid": "4420" 00:12:14.657 }, 00:12:14.657 "peer_address": { 00:12:14.657 "trtype": "TCP", 00:12:14.657 "adrfam": "IPv4", 00:12:14.657 "traddr": "10.0.0.1", 00:12:14.657 "trsvcid": "36616" 00:12:14.657 }, 00:12:14.657 "auth": { 00:12:14.657 "state": "completed", 00:12:14.657 "digest": "sha512", 00:12:14.657 "dhgroup": "ffdhe3072" 00:12:14.657 } 00:12:14.657 } 00:12:14.657 ]' 00:12:14.657 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.918 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.918 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.918 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:14.918 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.918 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.918 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.918 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.177 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:15.177 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid 
a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:15.744 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.744 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:15.744 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.744 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.003 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.003 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.003 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:16.003 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.262 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.521 00:12:16.521 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.521 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.521 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.780 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.780 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.780 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.780 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.780 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.780 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.780 { 00:12:16.780 "cntlid": 117, 00:12:16.780 "qid": 0, 00:12:16.780 "state": "enabled", 00:12:16.780 "thread": "nvmf_tgt_poll_group_000", 00:12:16.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:16.780 "listen_address": { 00:12:16.780 "trtype": "TCP", 00:12:16.780 "adrfam": "IPv4", 00:12:16.780 "traddr": "10.0.0.3", 00:12:16.780 "trsvcid": "4420" 00:12:16.780 }, 00:12:16.780 "peer_address": { 00:12:16.780 "trtype": "TCP", 00:12:16.780 "adrfam": "IPv4", 00:12:16.780 "traddr": "10.0.0.1", 00:12:16.780 "trsvcid": "36656" 00:12:16.780 }, 00:12:16.780 "auth": { 00:12:16.780 "state": "completed", 00:12:16.780 "digest": "sha512", 00:12:16.780 "dhgroup": "ffdhe3072" 00:12:16.780 } 00:12:16.780 } 00:12:16.780 ]' 00:12:16.780 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.038 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.038 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.038 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:17.038 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.038 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.038 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.038 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.296 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:17.296 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:17.862 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.862 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:17.862 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.862 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.862 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.862 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.862 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:17.862 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.120 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.686 00:12:18.686 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.686 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.686 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.945 { 00:12:18.945 "cntlid": 119, 00:12:18.945 "qid": 0, 00:12:18.945 "state": "enabled", 00:12:18.945 "thread": "nvmf_tgt_poll_group_000", 00:12:18.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:18.945 "listen_address": { 00:12:18.945 "trtype": "TCP", 00:12:18.945 "adrfam": "IPv4", 00:12:18.945 "traddr": "10.0.0.3", 00:12:18.945 "trsvcid": "4420" 00:12:18.945 }, 00:12:18.945 "peer_address": { 00:12:18.945 "trtype": "TCP", 00:12:18.945 "adrfam": "IPv4", 00:12:18.945 "traddr": "10.0.0.1", 00:12:18.945 "trsvcid": "36678" 00:12:18.945 }, 00:12:18.945 "auth": { 00:12:18.945 "state": "completed", 00:12:18.945 "digest": "sha512", 00:12:18.945 "dhgroup": "ffdhe3072" 00:12:18.945 } 00:12:18.945 } 00:12:18.945 ]' 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.945 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.204 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:19.204 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:20.139 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.139 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:20.139 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.139 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.139 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.139 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.139 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.139 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:20.139 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.398 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.657 00:12:20.657 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.657 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.657 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.916 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.916 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.916 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.916 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.916 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.916 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.916 { 00:12:20.916 "cntlid": 121, 00:12:20.916 "qid": 0, 00:12:20.916 "state": "enabled", 00:12:20.916 "thread": "nvmf_tgt_poll_group_000", 00:12:20.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:20.916 "listen_address": { 00:12:20.916 "trtype": "TCP", 00:12:20.916 "adrfam": "IPv4", 00:12:20.916 "traddr": "10.0.0.3", 00:12:20.916 "trsvcid": "4420" 00:12:20.916 }, 00:12:20.916 "peer_address": { 00:12:20.916 "trtype": "TCP", 00:12:20.916 "adrfam": "IPv4", 00:12:20.916 "traddr": "10.0.0.1", 00:12:20.916 "trsvcid": "36102" 00:12:20.916 }, 00:12:20.916 "auth": { 00:12:20.916 "state": "completed", 00:12:20.916 "digest": "sha512", 00:12:20.916 "dhgroup": "ffdhe4096" 00:12:20.916 } 00:12:20.916 } 00:12:20.916 ]' 00:12:20.916 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.916 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.916 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.195 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:21.195 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.195 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.195 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.195 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.482 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret 
DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:21.482 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:22.050 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.050 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:22.050 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.050 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.050 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.050 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.050 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.050 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.309 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.877 00:12:22.877 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.877 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.877 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.137 { 00:12:23.137 "cntlid": 123, 00:12:23.137 "qid": 0, 00:12:23.137 "state": "enabled", 00:12:23.137 "thread": "nvmf_tgt_poll_group_000", 00:12:23.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:23.137 "listen_address": { 00:12:23.137 "trtype": "TCP", 00:12:23.137 "adrfam": "IPv4", 00:12:23.137 "traddr": "10.0.0.3", 00:12:23.137 "trsvcid": "4420" 00:12:23.137 }, 00:12:23.137 "peer_address": { 00:12:23.137 "trtype": "TCP", 00:12:23.137 "adrfam": "IPv4", 00:12:23.137 "traddr": "10.0.0.1", 00:12:23.137 "trsvcid": "36122" 00:12:23.137 }, 00:12:23.137 "auth": { 00:12:23.137 "state": "completed", 00:12:23.137 "digest": "sha512", 00:12:23.137 "dhgroup": "ffdhe4096" 00:12:23.137 } 00:12:23.137 } 00:12:23.137 ]' 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:23.137 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.397 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.397 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.397 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.656 09:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:23.656 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:24.225 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.225 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:24.225 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.225 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.225 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.225 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.225 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:24.225 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:24.484 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:24.484 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.484 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:24.484 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:24.484 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:24.484 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.484 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.484 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.484 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.484 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.484 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.484 09:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.485 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.744 00:12:24.744 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.744 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.744 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.003 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.003 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.003 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.003 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.262 { 00:12:25.262 "cntlid": 125, 00:12:25.262 "qid": 0, 00:12:25.262 "state": "enabled", 00:12:25.262 "thread": "nvmf_tgt_poll_group_000", 00:12:25.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:25.262 "listen_address": { 00:12:25.262 "trtype": "TCP", 00:12:25.262 "adrfam": "IPv4", 00:12:25.262 "traddr": "10.0.0.3", 00:12:25.262 "trsvcid": "4420" 00:12:25.262 }, 00:12:25.262 "peer_address": { 00:12:25.262 "trtype": "TCP", 00:12:25.262 "adrfam": "IPv4", 00:12:25.262 "traddr": "10.0.0.1", 00:12:25.262 "trsvcid": "36146" 00:12:25.262 }, 00:12:25.262 "auth": { 00:12:25.262 "state": "completed", 00:12:25.262 "digest": "sha512", 00:12:25.262 "dhgroup": "ffdhe4096" 00:12:25.262 } 00:12:25.262 } 00:12:25.262 ]' 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.262 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.521 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:25.521 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:26.457 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.457 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:26.457 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.457 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.457 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.457 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.457 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:26.457 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:26.716 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:26.716 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.716 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:26.716 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:26.716 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:26.716 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.717 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:12:26.717 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.717 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.717 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.717 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:26.717 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:26.717 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:26.975 00:12:26.976 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.976 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.976 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.252 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.252 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.252 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.252 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.525 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.525 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.525 { 00:12:27.525 "cntlid": 127, 00:12:27.525 "qid": 0, 00:12:27.525 "state": "enabled", 00:12:27.525 "thread": "nvmf_tgt_poll_group_000", 00:12:27.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:27.525 "listen_address": { 00:12:27.525 "trtype": "TCP", 00:12:27.525 "adrfam": "IPv4", 00:12:27.525 "traddr": "10.0.0.3", 00:12:27.525 "trsvcid": "4420" 00:12:27.525 }, 00:12:27.525 "peer_address": { 00:12:27.525 "trtype": "TCP", 00:12:27.525 "adrfam": "IPv4", 00:12:27.525 "traddr": "10.0.0.1", 00:12:27.525 "trsvcid": "36154" 00:12:27.525 }, 00:12:27.525 "auth": { 00:12:27.525 "state": "completed", 00:12:27.525 "digest": "sha512", 00:12:27.525 "dhgroup": "ffdhe4096" 00:12:27.525 } 00:12:27.525 } 00:12:27.525 ]' 00:12:27.525 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.525 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.525 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.525 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:27.525 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.525 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.525 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.525 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.784 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:27.784 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:28.352 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.352 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:28.352 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.352 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.611 09:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.611 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.180 00:12:29.180 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.180 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.180 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.440 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.440 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.440 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.440 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.699 { 00:12:29.699 "cntlid": 129, 00:12:29.699 "qid": 0, 00:12:29.699 "state": "enabled", 00:12:29.699 "thread": "nvmf_tgt_poll_group_000", 00:12:29.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:29.699 "listen_address": { 00:12:29.699 "trtype": "TCP", 00:12:29.699 "adrfam": "IPv4", 00:12:29.699 "traddr": "10.0.0.3", 00:12:29.699 "trsvcid": "4420" 00:12:29.699 }, 00:12:29.699 "peer_address": { 00:12:29.699 "trtype": "TCP", 00:12:29.699 "adrfam": "IPv4", 00:12:29.699 "traddr": "10.0.0.1", 00:12:29.699 "trsvcid": "56646" 00:12:29.699 }, 00:12:29.699 "auth": { 00:12:29.699 "state": "completed", 00:12:29.699 "digest": "sha512", 00:12:29.699 "dhgroup": "ffdhe6144" 00:12:29.699 } 00:12:29.699 } 00:12:29.699 ]' 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.699 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.958 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:29.958 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:30.526 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.526 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:30.526 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.526 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.526 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.526 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.526 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.526 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.785 09:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.785 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.353 00:12:31.353 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.353 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.353 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.612 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.612 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.612 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.612 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.612 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.612 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.612 { 00:12:31.612 "cntlid": 131, 00:12:31.612 "qid": 0, 00:12:31.612 "state": "enabled", 00:12:31.612 "thread": "nvmf_tgt_poll_group_000", 00:12:31.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:31.612 "listen_address": { 00:12:31.612 "trtype": "TCP", 00:12:31.612 "adrfam": "IPv4", 00:12:31.612 "traddr": "10.0.0.3", 00:12:31.612 "trsvcid": "4420" 00:12:31.612 }, 00:12:31.612 "peer_address": { 00:12:31.612 "trtype": "TCP", 00:12:31.612 "adrfam": "IPv4", 00:12:31.612 "traddr": "10.0.0.1", 00:12:31.612 "trsvcid": "56664" 00:12:31.612 }, 00:12:31.612 "auth": { 00:12:31.612 "state": "completed", 00:12:31.612 "digest": "sha512", 00:12:31.612 "dhgroup": "ffdhe6144" 00:12:31.612 } 00:12:31.612 } 00:12:31.612 ]' 00:12:31.612 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.612 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.612 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.871 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:31.871 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:31.871 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.871 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.871 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.131 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:32.131 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:32.698 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.698 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:32.698 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.698 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.698 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.698 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.698 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:32.698 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.266 09:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.266 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.524 00:12:33.524 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.524 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.524 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.783 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.783 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.783 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.783 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.783 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.783 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.783 { 00:12:33.783 "cntlid": 133, 00:12:33.783 "qid": 0, 00:12:33.783 "state": "enabled", 00:12:33.783 "thread": "nvmf_tgt_poll_group_000", 00:12:33.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:33.783 "listen_address": { 00:12:33.783 "trtype": "TCP", 00:12:33.783 "adrfam": "IPv4", 00:12:33.783 "traddr": "10.0.0.3", 00:12:33.783 "trsvcid": "4420" 00:12:33.783 }, 00:12:33.783 "peer_address": { 00:12:33.783 "trtype": "TCP", 00:12:33.783 "adrfam": "IPv4", 00:12:33.783 "traddr": "10.0.0.1", 00:12:33.783 "trsvcid": "56680" 00:12:33.783 }, 00:12:33.783 "auth": { 00:12:33.783 "state": "completed", 00:12:33.783 "digest": "sha512", 00:12:33.783 "dhgroup": "ffdhe6144" 00:12:33.783 } 00:12:33.783 } 00:12:33.783 ]' 00:12:33.784 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.043 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.043 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.043 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:34.043 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.043 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.043 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.043 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.301 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:34.301 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.238 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.805 00:12:35.805 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.805 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.805 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.066 { 00:12:36.066 "cntlid": 135, 00:12:36.066 "qid": 0, 00:12:36.066 "state": "enabled", 00:12:36.066 "thread": "nvmf_tgt_poll_group_000", 00:12:36.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:36.066 "listen_address": { 00:12:36.066 "trtype": "TCP", 00:12:36.066 "adrfam": "IPv4", 00:12:36.066 "traddr": "10.0.0.3", 00:12:36.066 "trsvcid": "4420" 00:12:36.066 }, 00:12:36.066 "peer_address": { 00:12:36.066 "trtype": "TCP", 00:12:36.066 "adrfam": "IPv4", 00:12:36.066 "traddr": "10.0.0.1", 00:12:36.066 "trsvcid": "56708" 00:12:36.066 }, 00:12:36.066 "auth": { 00:12:36.066 "state": "completed", 00:12:36.066 "digest": "sha512", 00:12:36.066 "dhgroup": "ffdhe6144" 00:12:36.066 } 00:12:36.066 } 00:12:36.066 ]' 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:36.066 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.325 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.325 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.325 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.583 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:36.583 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:37.149 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.149 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:37.149 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.149 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.149 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.149 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:37.149 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.149 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.149 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.408 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.975 00:12:37.975 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.975 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.975 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.234 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.234 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.234 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.234 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.234 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.234 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.234 { 00:12:38.234 "cntlid": 137, 00:12:38.234 "qid": 0, 00:12:38.234 "state": "enabled", 00:12:38.234 "thread": "nvmf_tgt_poll_group_000", 00:12:38.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:38.234 "listen_address": { 00:12:38.234 "trtype": "TCP", 00:12:38.234 "adrfam": "IPv4", 00:12:38.234 "traddr": "10.0.0.3", 00:12:38.234 "trsvcid": "4420" 00:12:38.234 }, 00:12:38.234 "peer_address": { 00:12:38.234 "trtype": "TCP", 00:12:38.234 "adrfam": "IPv4", 00:12:38.234 "traddr": "10.0.0.1", 00:12:38.234 "trsvcid": "56744" 00:12:38.234 }, 00:12:38.234 "auth": { 00:12:38.234 "state": "completed", 00:12:38.234 "digest": "sha512", 00:12:38.234 "dhgroup": "ffdhe8192" 00:12:38.234 } 00:12:38.234 } 00:12:38.234 ]' 00:12:38.234 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.234 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.493 09:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.493 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:38.493 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.493 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.493 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.493 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.752 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:38.752 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:39.320 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.320 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:39.320 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.320 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.320 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.320 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.320 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:39.320 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:39.578 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:39.578 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.578 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:39.578 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:39.578 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:39.578 09:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.579 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.579 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.579 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.579 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.579 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.579 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.579 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.145 00:12:40.145 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.145 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.145 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.404 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.404 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.404 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.404 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.404 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.404 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.404 { 00:12:40.404 "cntlid": 139, 00:12:40.404 "qid": 0, 00:12:40.404 "state": "enabled", 00:12:40.404 "thread": "nvmf_tgt_poll_group_000", 00:12:40.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:40.404 "listen_address": { 00:12:40.404 "trtype": "TCP", 00:12:40.404 "adrfam": "IPv4", 00:12:40.404 "traddr": "10.0.0.3", 00:12:40.404 "trsvcid": "4420" 00:12:40.404 }, 00:12:40.404 "peer_address": { 00:12:40.404 "trtype": "TCP", 00:12:40.404 "adrfam": "IPv4", 00:12:40.404 "traddr": "10.0.0.1", 00:12:40.404 "trsvcid": "37676" 00:12:40.404 }, 00:12:40.404 "auth": { 00:12:40.404 "state": "completed", 00:12:40.404 "digest": "sha512", 00:12:40.404 "dhgroup": "ffdhe8192" 00:12:40.404 } 00:12:40.404 } 00:12:40.404 ]' 00:12:40.404 09:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.404 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.404 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.663 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.663 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.663 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.663 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.663 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.921 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:40.921 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: --dhchap-ctrl-secret DHHC-1:02:MGYxNjg3ZTFkZTc3YThjNDVjMjY1NjFlMzQ5OWExMWFkNDVhOWViODZlYzExNDliE64iCA==: 00:12:41.488 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.488 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:41.488 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.488 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.488 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.488 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.488 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:41.488 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.747 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.313 00:12:42.572 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.572 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.572 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.927 { 00:12:42.927 "cntlid": 141, 00:12:42.927 "qid": 0, 00:12:42.927 "state": "enabled", 00:12:42.927 "thread": "nvmf_tgt_poll_group_000", 00:12:42.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:42.927 "listen_address": { 00:12:42.927 "trtype": "TCP", 00:12:42.927 "adrfam": "IPv4", 00:12:42.927 "traddr": "10.0.0.3", 00:12:42.927 "trsvcid": "4420" 00:12:42.927 }, 00:12:42.927 "peer_address": { 00:12:42.927 "trtype": "TCP", 00:12:42.927 "adrfam": "IPv4", 00:12:42.927 "traddr": "10.0.0.1", 00:12:42.927 "trsvcid": "37690" 00:12:42.927 }, 00:12:42.927 "auth": { 00:12:42.927 "state": "completed", 00:12:42.927 "digest": 
"sha512", 00:12:42.927 "dhgroup": "ffdhe8192" 00:12:42.927 } 00:12:42.927 } 00:12:42.927 ]' 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.927 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.187 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:43.187 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:01:YmM3MjliMDliZDg5YzU4MDVhZjNlM2YxZTczZTNiMWbphZn4: 00:12:43.753 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.753 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:43.753 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.754 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.754 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.754 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.754 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:43.754 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:44.012 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:44.578 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.837 { 00:12:44.837 "cntlid": 143, 00:12:44.837 "qid": 0, 00:12:44.837 "state": "enabled", 00:12:44.837 "thread": "nvmf_tgt_poll_group_000", 00:12:44.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:44.837 "listen_address": { 00:12:44.837 "trtype": "TCP", 00:12:44.837 "adrfam": "IPv4", 00:12:44.837 "traddr": "10.0.0.3", 00:12:44.837 "trsvcid": "4420" 00:12:44.837 }, 00:12:44.837 "peer_address": { 00:12:44.837 "trtype": "TCP", 00:12:44.837 "adrfam": "IPv4", 00:12:44.837 "traddr": "10.0.0.1", 00:12:44.837 "trsvcid": "37718" 00:12:44.837 }, 00:12:44.837 "auth": { 00:12:44.837 "state": "completed", 00:12:44.837 
"digest": "sha512", 00:12:44.837 "dhgroup": "ffdhe8192" 00:12:44.837 } 00:12:44.837 } 00:12:44.837 ]' 00:12:44.837 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.095 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.095 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.095 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:45.095 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.095 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.095 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.095 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.353 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:45.353 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:45.919 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:46.177 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:46.177 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.177 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:46.177 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:46.177 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:46.178 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.178 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.178 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.178 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.178 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.178 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.178 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.178 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.744 00:12:46.744 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.744 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.744 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.002 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.002 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.002 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.002 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.002 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.002 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.002 { 00:12:47.002 "cntlid": 145, 00:12:47.002 "qid": 0, 00:12:47.002 "state": "enabled", 00:12:47.002 "thread": "nvmf_tgt_poll_group_000", 00:12:47.002 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:47.002 "listen_address": { 00:12:47.002 "trtype": "TCP", 00:12:47.002 "adrfam": "IPv4", 00:12:47.002 "traddr": "10.0.0.3", 00:12:47.002 "trsvcid": "4420" 00:12:47.002 }, 00:12:47.002 "peer_address": { 00:12:47.002 "trtype": "TCP", 00:12:47.002 "adrfam": "IPv4", 00:12:47.002 "traddr": "10.0.0.1", 00:12:47.002 "trsvcid": "37752" 00:12:47.002 }, 00:12:47.002 "auth": { 00:12:47.002 "state": "completed", 00:12:47.002 "digest": "sha512", 00:12:47.002 "dhgroup": "ffdhe8192" 00:12:47.002 } 00:12:47.002 } 00:12:47.002 ]' 00:12:47.002 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.003 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.003 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.261 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.261 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.261 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.261 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.261 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.519 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:47.519 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:00:NDgwNTNmZDk1MmI5MjQwOGU0MmYzNDk4M2MwMzBkYjczYmEwZGYwYzcwYzA4MDVjh0sX8Q==: --dhchap-ctrl-secret DHHC-1:03:OGE1OTc3NzhmZmQ2ODUwOTZkMTgzOWJiY2NkYmZjNDc5NmFjYzZkZjA0ODFiYzVmMDkyODUyYWEzNmU4Nzk2ZhqYtYg=: 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 00:12:48.096 09:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:48.096 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:48.664 request: 00:12:48.664 { 00:12:48.664 "name": "nvme0", 00:12:48.664 "trtype": "tcp", 00:12:48.664 "traddr": "10.0.0.3", 00:12:48.664 "adrfam": "ipv4", 00:12:48.664 "trsvcid": "4420", 00:12:48.664 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:48.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:48.664 "prchk_reftag": false, 00:12:48.664 "prchk_guard": false, 00:12:48.664 "hdgst": false, 00:12:48.665 "ddgst": false, 00:12:48.665 "dhchap_key": "key2", 00:12:48.665 "allow_unrecognized_csi": false, 00:12:48.665 "method": "bdev_nvme_attach_controller", 00:12:48.665 "req_id": 1 00:12:48.665 } 00:12:48.665 Got JSON-RPC error response 00:12:48.665 response: 00:12:48.665 { 00:12:48.665 "code": -5, 00:12:48.665 "message": "Input/output error" 00:12:48.665 } 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:48.665 
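The negative test above reduces to the following host-side sketch (socket path, address, and NQNs are the ones used throughout this run; rpc_cmd and hostrpc are the test's wrappers for the target-side and host-side RPC sockets, as visible in the trace). The target's host entry was added with key1 only, so an attach that presents key2 cannot complete the DH-HMAC-CHAP handshake:

# Target side (from the run above): the host NQN is allowed in with key1 only.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1
# Host side: presenting key2 instead is expected to fail the handshake.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
# -> JSON-RPC error -5 "Input/output error", as in the response printed above.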
09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:48.665 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:49.233 request: 00:12:49.233 { 00:12:49.233 "name": "nvme0", 00:12:49.233 "trtype": "tcp", 00:12:49.233 "traddr": "10.0.0.3", 00:12:49.233 "adrfam": "ipv4", 00:12:49.233 "trsvcid": "4420", 00:12:49.233 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:49.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:49.233 "prchk_reftag": false, 00:12:49.233 "prchk_guard": false, 00:12:49.233 "hdgst": false, 00:12:49.233 "ddgst": false, 00:12:49.233 "dhchap_key": "key1", 00:12:49.233 "dhchap_ctrlr_key": "ckey2", 00:12:49.233 "allow_unrecognized_csi": false, 00:12:49.233 "method": "bdev_nvme_attach_controller", 00:12:49.233 "req_id": 1 00:12:49.233 } 00:12:49.233 Got JSON-RPC error response 00:12:49.233 response: 00:12:49.233 { 
00:12:49.233 "code": -5, 00:12:49.233 "message": "Input/output error" 00:12:49.233 } 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.492 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.493 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:49.493 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.493 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:49.493 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.493 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:49.493 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.493 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.493 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.493 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.061 
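This step mirrors the previous one from the bidirectional angle: the host entry was re-added with a host key only (--dhchap-key key1, no controller key), while the initiator requests mutual authentication with --dhchap-ctrlr-key ckey1, so the attach is again expected to return the Input/output error shown in the response below. A condensed sketch using the same commands as the trace:

# Target: host key only, no controller key registered for this host entry
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1
# Host: additionally demands that the controller authenticate itself -> expected to fail
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1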
request: 00:12:50.061 { 00:12:50.061 "name": "nvme0", 00:12:50.061 "trtype": "tcp", 00:12:50.061 "traddr": "10.0.0.3", 00:12:50.061 "adrfam": "ipv4", 00:12:50.061 "trsvcid": "4420", 00:12:50.061 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:50.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:50.061 "prchk_reftag": false, 00:12:50.061 "prchk_guard": false, 00:12:50.061 "hdgst": false, 00:12:50.061 "ddgst": false, 00:12:50.061 "dhchap_key": "key1", 00:12:50.061 "dhchap_ctrlr_key": "ckey1", 00:12:50.061 "allow_unrecognized_csi": false, 00:12:50.061 "method": "bdev_nvme_attach_controller", 00:12:50.061 "req_id": 1 00:12:50.061 } 00:12:50.061 Got JSON-RPC error response 00:12:50.061 response: 00:12:50.061 { 00:12:50.061 "code": -5, 00:12:50.061 "message": "Input/output error" 00:12:50.061 } 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67709 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67709 ']' 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67709 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67709 00:12:50.061 killing process with pid 67709 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67709' 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67709 00:12:50.061 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67709 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:50.320 09:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=70776 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 70776 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70776 ']' 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.320 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70776 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70776 ']' 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
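For orientation, the relaunch above is the stock helper sequence; the binary invocation is the one captured in the trace, started in --wait-for-rpc mode so the rest of the configuration can be driven over RPC, with -L nvmf_auth enabling the auth-specific debug log component (the PID and shared-memory id are specific to this run):

# Restart the target inside the test netns; configuration continues over RPC.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!                      # recorded as 70776 in this run
waitforlisten "$nvmfpid"        # returns once /var/tmp/spdk.sock accepts RPCs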
00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.515 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.515 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:51.515 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:51.515 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.515 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.774 null0 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.da4 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.1yh ]] 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1yh 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.774 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oN1 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.STJ ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.STJ 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:51.775 09:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0sR 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.JVd ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JVd 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uS5 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
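In this phase the per-key secrets live in files and are registered with the new target instance as named keyring entries; later RPCs then refer to the keys by name. A condensed sketch of the target-side steps just traced (file names are the ones generated in this run):

# Register the on-disk secrets under stable key names
rpc_cmd keyring_file_add_key key3  /tmp/spdk.key-sha512.uS5
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1yh
# Allow the host in, referring to the key by name rather than by value
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3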
00:12:51.775 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.711 nvme0n1 00:12:52.711 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.711 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.711 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.970 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.970 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.970 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.970 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.970 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.970 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.970 { 00:12:52.970 "cntlid": 1, 00:12:52.970 "qid": 0, 00:12:52.970 "state": "enabled", 00:12:52.970 "thread": "nvmf_tgt_poll_group_000", 00:12:52.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:52.970 "listen_address": { 00:12:52.970 "trtype": "TCP", 00:12:52.970 "adrfam": "IPv4", 00:12:52.970 "traddr": "10.0.0.3", 00:12:52.970 "trsvcid": "4420" 00:12:52.970 }, 00:12:52.970 "peer_address": { 00:12:52.970 "trtype": "TCP", 00:12:52.970 "adrfam": "IPv4", 00:12:52.970 "traddr": "10.0.0.1", 00:12:52.970 "trsvcid": "41646" 00:12:52.970 }, 00:12:52.970 "auth": { 00:12:52.970 "state": "completed", 00:12:52.970 "digest": "sha512", 00:12:52.970 "dhgroup": "ffdhe8192" 00:12:52.970 } 00:12:52.970 } 00:12:52.970 ]' 00:12:52.970 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.970 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.970 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.228 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.228 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.228 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.228 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.228 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.526 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:53.526 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key3 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:54.111 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.678 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.678 request: 00:12:54.678 { 00:12:54.678 "name": "nvme0", 00:12:54.678 "trtype": "tcp", 00:12:54.678 "traddr": "10.0.0.3", 00:12:54.678 "adrfam": "ipv4", 00:12:54.678 "trsvcid": "4420", 00:12:54.678 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:54.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:54.678 "prchk_reftag": false, 00:12:54.678 "prchk_guard": false, 00:12:54.678 "hdgst": false, 00:12:54.678 "ddgst": false, 00:12:54.678 "dhchap_key": "key3", 00:12:54.678 "allow_unrecognized_csi": false, 00:12:54.678 "method": "bdev_nvme_attach_controller", 00:12:54.678 "req_id": 1 00:12:54.678 } 00:12:54.678 Got JSON-RPC error response 00:12:54.678 response: 00:12:54.678 { 00:12:54.678 "code": -5, 00:12:54.678 "message": "Input/output error" 00:12:54.678 } 00:12:54.936 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:54.936 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:54.936 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:54.936 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:54.936 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:54.936 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:54.936 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:54.936 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.194 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.453 request: 00:12:55.453 { 00:12:55.453 "name": "nvme0", 00:12:55.453 "trtype": "tcp", 00:12:55.453 "traddr": "10.0.0.3", 00:12:55.453 "adrfam": "ipv4", 00:12:55.453 "trsvcid": "4420", 00:12:55.453 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:55.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:55.453 "prchk_reftag": false, 00:12:55.453 "prchk_guard": false, 00:12:55.453 "hdgst": false, 00:12:55.453 "ddgst": false, 00:12:55.453 "dhchap_key": "key3", 00:12:55.453 "allow_unrecognized_csi": false, 00:12:55.453 "method": "bdev_nvme_attach_controller", 00:12:55.453 "req_id": 1 00:12:55.453 } 00:12:55.453 Got JSON-RPC error response 00:12:55.453 response: 00:12:55.453 { 00:12:55.453 "code": -5, 00:12:55.453 "message": "Input/output error" 00:12:55.453 } 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:55.453 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:55.712 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:56.280 request: 00:12:56.280 { 00:12:56.280 "name": "nvme0", 00:12:56.280 "trtype": "tcp", 00:12:56.280 "traddr": "10.0.0.3", 00:12:56.280 "adrfam": "ipv4", 00:12:56.280 "trsvcid": "4420", 00:12:56.280 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:56.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:56.280 "prchk_reftag": false, 00:12:56.280 "prchk_guard": false, 00:12:56.280 "hdgst": false, 00:12:56.280 "ddgst": false, 00:12:56.280 "dhchap_key": "key0", 00:12:56.280 "dhchap_ctrlr_key": "key1", 00:12:56.280 "allow_unrecognized_csi": false, 00:12:56.280 "method": "bdev_nvme_attach_controller", 00:12:56.280 "req_id": 1 00:12:56.280 } 00:12:56.280 Got JSON-RPC error response 00:12:56.280 response: 00:12:56.280 { 00:12:56.280 "code": -5, 00:12:56.280 "message": "Input/output error" 00:12:56.280 } 00:12:56.280 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:56.280 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:56.280 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:56.280 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:12:56.280 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:56.280 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:56.280 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:56.538 nvme0n1 00:12:56.538 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:56.538 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.538 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:56.800 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.800 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.800 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.059 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 00:12:57.059 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.059 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.059 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.059 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:57.059 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:57.059 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:57.993 nvme0n1 00:12:57.993 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:57.993 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.993 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:58.251 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.251 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:58.251 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.251 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.251 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.251 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:58.251 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:58.251 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.511 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.511 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:58.511 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid a5ef64a0-86d4-4d8b-af10-05a9f556092c -l 0 --dhchap-secret DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: --dhchap-ctrl-secret DHHC-1:03:OWJkZjU4YzUxZWI1ZTEzOGQwMjc1OWM1OTZmMDRhYjk4MmQ3ZjdhYWE5YWE0OGNiOGJhYjE0YjU0ZWU5ODYyMFlLs20=: 00:12:59.080 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:59.080 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:59.080 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:59.080 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:59.080 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:59.080 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:59.080 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:59.080 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.080 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:59.339 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:59.907 request: 00:12:59.907 { 00:12:59.907 "name": "nvme0", 00:12:59.907 "trtype": "tcp", 00:12:59.907 "traddr": "10.0.0.3", 00:12:59.907 "adrfam": "ipv4", 00:12:59.907 "trsvcid": "4420", 00:12:59.907 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:59.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c", 00:12:59.907 "prchk_reftag": false, 00:12:59.907 "prchk_guard": false, 00:12:59.907 "hdgst": false, 00:12:59.907 "ddgst": false, 00:12:59.907 "dhchap_key": "key1", 00:12:59.907 "allow_unrecognized_csi": false, 00:12:59.907 "method": "bdev_nvme_attach_controller", 00:12:59.907 "req_id": 1 00:12:59.907 } 00:12:59.907 Got JSON-RPC error response 00:12:59.907 response: 00:12:59.907 { 00:12:59.907 "code": -5, 00:12:59.907 "message": "Input/output error" 00:12:59.907 } 00:12:59.907 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:59.907 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:59.907 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:59.907 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:59.907 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:59.907 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:59.907 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:00.843 nvme0n1 00:13:00.843 
09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:00.843 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.843 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:01.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.410 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:13:01.410 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.411 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.411 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.411 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:01.411 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:01.411 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:01.986 nvme0n1 00:13:01.986 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:01.986 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:01.986 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.245 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.245 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.245 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.504 09:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: '' 2s 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: ]] 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzA0YjVkZjY1ODk2ODI4ZTI5NzFlZWQwYTA2MTc2YjSaAV6m: 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:02.504 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.410 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: 2s 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:04.410 09:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: ]] 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZjlhYjk0MmZhYjg4NWJlNmMzZTIxMWFhOWNhZDQ5MDU4MzU0NDFlY2M1MzU5YTlk00pfKQ==: 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:04.410 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:06.948 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.949 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.949 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.949 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:06.949 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:06.949 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:07.517 nvme0n1 00:13:07.517 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:07.517 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.517 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.517 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.517 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:07.517 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:08.084 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:08.084 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:08.084 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.342 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.342 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:13:08.342 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.342 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.602 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.602 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:08.602 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:08.602 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:08.602 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.602 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:09.234 09:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:09.234 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:09.801 request: 00:13:09.801 { 00:13:09.801 "name": "nvme0", 00:13:09.801 "dhchap_key": "key1", 00:13:09.801 "dhchap_ctrlr_key": "key3", 00:13:09.801 "method": "bdev_nvme_set_keys", 00:13:09.801 "req_id": 1 00:13:09.801 } 00:13:09.801 Got JSON-RPC error response 00:13:09.801 response: 00:13:09.801 { 00:13:09.801 "code": -13, 00:13:09.801 "message": "Permission denied" 00:13:09.801 } 00:13:09.801 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:09.801 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:09.801 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:09.801 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:09.801 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:09.801 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:09.801 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.801 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:09.801 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:11.177 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:12.112 nvme0n1 00:13:12.112 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:12.112 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.112 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:12.371 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:12.939 request: 00:13:12.939 { 00:13:12.939 "name": "nvme0", 00:13:12.939 "dhchap_key": "key2", 00:13:12.939 "dhchap_ctrlr_key": "key0", 00:13:12.939 "method": "bdev_nvme_set_keys", 00:13:12.939 "req_id": 1 00:13:12.939 } 00:13:12.939 Got JSON-RPC error response 00:13:12.939 response: 00:13:12.939 { 00:13:12.939 "code": -13, 00:13:12.939 "message": "Permission denied" 00:13:12.939 } 00:13:12.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:12.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:12.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:12.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:12.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:12.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:12.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.197 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:13.197 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:14.133 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:14.133 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.133 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:14.392 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:14.392 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:14.392 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:14.392 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67741 00:13:14.392 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67741 ']' 00:13:14.392 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67741 00:13:14.392 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:14.392 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.392 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67741 00:13:14.392 killing process with pid 67741 00:13:14.392 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:14.392 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:14.392 09:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67741' 00:13:14.392 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67741 00:13:14.392 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67741 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:14.977 rmmod nvme_tcp 00:13:14.977 rmmod nvme_fabrics 00:13:14.977 rmmod nvme_keyring 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 70776 ']' 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 70776 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70776 ']' 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70776 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70776 00:13:14.977 killing process with pid 70776 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70776' 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70776 00:13:14.977 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70776 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 
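A note for readers following the trace: the nvmf_auth_target cases above (target/auth.sh@213 through @277) exercise DH-HMAC-CHAP key rotation. Keys are swapped on the target side with nvmf_subsystem_set_keys and on the host side with bdev_nvme_set_keys, and deliberately mismatched combinations are expected to fail, which is what the two JSON-RPC error responses above (-5 Input/output error, -13 Permission denied) show. A minimal sketch of one rotation step, using only the RPCs and values that appear in this run (the target-side call is issued through the suite's rpc_cmd wrapper in the actual script; treat paths and key names as illustrative):

    # target side: install the new key pair for this host on the subsystem
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host side: re-key the existing bdev controller to match
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # verify the controller survived re-authentication
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers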
00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:15.236 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:15.495 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:15.495 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:15.495 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.495 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:15.495 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.495 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.495 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.495 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:15.495 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.da4 /tmp/spdk.key-sha256.oN1 /tmp/spdk.key-sha384.0sR /tmp/spdk.key-sha512.uS5 /tmp/spdk.key-sha512.1yh /tmp/spdk.key-sha384.STJ /tmp/spdk.key-sha256.JVd '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:15.495 ************************************ 00:13:15.495 END TEST nvmf_auth_target 00:13:15.495 ************************************ 00:13:15.495 00:13:15.495 real 3m10.845s 00:13:15.495 user 7m34.836s 00:13:15.495 sys 0m30.317s 00:13:15.495 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.495 09:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.495 09:19:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:15.495 09:19:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:15.495 09:19:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:15.495 09:19:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.495 09:19:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.495 ************************************ 00:13:15.495 START TEST nvmf_bdevio_no_huge 00:13:15.495 ************************************ 00:13:15.495 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:15.495 * Looking for test storage... 00:13:15.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:15.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.755 --rc genhtml_branch_coverage=1 00:13:15.755 --rc genhtml_function_coverage=1 00:13:15.755 --rc genhtml_legend=1 00:13:15.755 --rc geninfo_all_blocks=1 00:13:15.755 --rc geninfo_unexecuted_blocks=1 00:13:15.755 00:13:15.755 ' 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:15.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.755 --rc genhtml_branch_coverage=1 00:13:15.755 --rc genhtml_function_coverage=1 00:13:15.755 --rc genhtml_legend=1 00:13:15.755 --rc geninfo_all_blocks=1 00:13:15.755 --rc geninfo_unexecuted_blocks=1 00:13:15.755 00:13:15.755 ' 00:13:15.755 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:15.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.755 --rc genhtml_branch_coverage=1 00:13:15.755 --rc genhtml_function_coverage=1 00:13:15.755 --rc genhtml_legend=1 00:13:15.755 --rc geninfo_all_blocks=1 00:13:15.755 --rc geninfo_unexecuted_blocks=1 00:13:15.756 00:13:15.756 ' 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.756 --rc genhtml_branch_coverage=1 00:13:15.756 --rc genhtml_function_coverage=1 00:13:15.756 --rc genhtml_legend=1 00:13:15.756 --rc geninfo_all_blocks=1 00:13:15.756 --rc geninfo_unexecuted_blocks=1 00:13:15.756 00:13:15.756 ' 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:15.756 
09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.756 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # nvmf_veth_init 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:15.756 
09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:15.756 Cannot find device "nvmf_init_br" 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:15.756 Cannot find device "nvmf_init_br2" 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:15.756 Cannot find device "nvmf_tgt_br" 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.756 Cannot find device "nvmf_tgt_br2" 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:15.756 Cannot find device "nvmf_init_br" 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:15.756 Cannot find device "nvmf_init_br2" 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:15.756 Cannot find device "nvmf_tgt_br" 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:15.756 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:15.757 Cannot find device "nvmf_tgt_br2" 00:13:15.757 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:15.757 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:15.757 Cannot find device "nvmf_br" 00:13:15.757 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:15.757 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:16.016 Cannot find device "nvmf_init_if" 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:16.016 Cannot find device "nvmf_init_if2" 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:16.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:16.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:16.016 09:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:16.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:16.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:13:16.016 00:13:16.016 --- 10.0.0.3 ping statistics --- 00:13:16.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.016 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:16.016 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:16.016 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:13:16.016 00:13:16.016 --- 10.0.0.4 ping statistics --- 00:13:16.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.016 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:16.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:16.016 00:13:16.016 --- 10.0.0.1 ping statistics --- 00:13:16.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.016 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:16.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:16.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:13:16.016 00:13:16.016 --- 10.0.0.2 ping statistics --- 00:13:16.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.016 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # return 0 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:16.016 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:16.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=71418 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 71418 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71418 ']' 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.275 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:16.275 [2024-10-08 09:19:07.787505] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:13:16.275 [2024-10-08 09:19:07.787596] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:16.275 [2024-10-08 09:19:07.952988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.534 [2024-10-08 09:19:08.107776] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.534 [2024-10-08 09:19:08.107842] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.534 [2024-10-08 09:19:08.107856] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.534 [2024-10-08 09:19:08.107867] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.534 [2024-10-08 09:19:08.107877] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.534 [2024-10-08 09:19:08.108517] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:13:16.534 [2024-10-08 09:19:08.108842] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:13:16.534 [2024-10-08 09:19:08.108959] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:13:16.534 [2024-10-08 09:19:08.109079] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.534 [2024-10-08 09:19:08.115033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.471 [2024-10-08 09:19:08.877597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.471 Malloc0 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.471 09:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.471 [2024-10-08 09:19:08.917936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:17.471 { 00:13:17.471 "params": { 00:13:17.471 "name": "Nvme$subsystem", 00:13:17.471 "trtype": "$TEST_TRANSPORT", 00:13:17.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:17.471 "adrfam": "ipv4", 00:13:17.471 "trsvcid": "$NVMF_PORT", 00:13:17.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:17.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:17.471 "hdgst": ${hdgst:-false}, 00:13:17.471 "ddgst": ${ddgst:-false} 00:13:17.471 }, 00:13:17.471 "method": "bdev_nvme_attach_controller" 00:13:17.471 } 00:13:17.471 EOF 00:13:17.471 )") 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:13:17.471 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:17.471 "params": { 00:13:17.471 "name": "Nvme1", 00:13:17.471 "trtype": "tcp", 00:13:17.471 "traddr": "10.0.0.3", 00:13:17.471 "adrfam": "ipv4", 00:13:17.471 "trsvcid": "4420", 00:13:17.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:17.471 "hdgst": false, 00:13:17.471 "ddgst": false 00:13:17.471 }, 00:13:17.471 "method": "bdev_nvme_attach_controller" 00:13:17.471 }' 00:13:17.471 [2024-10-08 09:19:08.979171] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:13:17.471 [2024-10-08 09:19:08.979894] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71460 ] 00:13:17.471 [2024-10-08 09:19:09.127637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.730 [2024-10-08 09:19:09.282850] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.730 [2024-10-08 09:19:09.283009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.730 [2024-10-08 09:19:09.283016] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.730 [2024-10-08 09:19:09.297336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:17.990 I/O targets: 00:13:17.990 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:17.990 00:13:17.990 00:13:17.990 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.990 http://cunit.sourceforge.net/ 00:13:17.990 00:13:17.990 00:13:17.990 Suite: bdevio tests on: Nvme1n1 00:13:17.990 Test: blockdev write read block ...passed 00:13:17.990 Test: blockdev write zeroes read block ...passed 00:13:17.990 Test: blockdev write zeroes read no split ...passed 00:13:17.990 Test: blockdev write zeroes read split ...passed 00:13:17.990 Test: blockdev write zeroes read split partial ...passed 00:13:17.990 Test: blockdev reset ...[2024-10-08 09:19:09.545510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:17.990 [2024-10-08 09:19:09.545640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232e720 (9): Bad file descriptor 00:13:17.990 [2024-10-08 09:19:09.560240] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:17.990 passed 00:13:17.990 Test: blockdev write read 8 blocks ...passed 00:13:17.990 Test: blockdev write read size > 128k ...passed 00:13:17.990 Test: blockdev write read invalid size ...passed 00:13:17.990 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.990 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.990 Test: blockdev write read max offset ...passed 00:13:17.990 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.990 Test: blockdev writev readv 8 blocks ...passed 00:13:17.990 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.990 Test: blockdev writev readv block ...passed 00:13:17.990 Test: blockdev writev readv size > 128k ...passed 00:13:17.990 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.990 Test: blockdev comparev and writev ...[2024-10-08 09:19:09.573118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.990 [2024-10-08 09:19:09.573519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.573957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.990 [2024-10-08 09:19:09.574406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.575146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.990 [2024-10-08 09:19:09.575492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.575701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.990 [2024-10-08 09:19:09.575803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.576092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.990 [2024-10-08 09:19:09.576121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.576140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.990 [2024-10-08 09:19:09.576151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.576424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.990 [2024-10-08 09:19:09.576441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.576458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.990 [2024-10-08 09:19:09.576469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:17.990 passed 00:13:17.990 Test: blockdev nvme passthru rw ...passed 00:13:17.990 Test: blockdev nvme passthru vendor specific ...[2024-10-08 09:19:09.577557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.990 [2024-10-08 09:19:09.577721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.578107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.990 [2024-10-08 09:19:09.578142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.578382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.990 [2024-10-08 09:19:09.578482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:17.990 [2024-10-08 09:19:09.578763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.990 [2024-10-08 09:19:09.578792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:17.990 passed 00:13:17.990 Test: blockdev nvme admin passthru ...passed 00:13:17.990 Test: blockdev copy ...passed 00:13:17.990 00:13:17.990 Run Summary: Type Total Ran Passed Failed Inactive 00:13:17.990 suites 1 1 n/a 0 0 00:13:17.990 tests 23 23 23 0 0 00:13:17.990 asserts 152 152 152 0 n/a 00:13:17.990 00:13:17.990 Elapsed time = 0.185 seconds 00:13:18.559 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.559 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.559 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.559 rmmod nvme_tcp 00:13:18.559 rmmod nvme_fabrics 00:13:18.559 rmmod nvme_keyring 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 71418 ']' 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 71418 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71418 ']' 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71418 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71418 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:18.559 killing process with pid 71418 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71418' 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71418 00:13:18.559 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71418 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:19.127 09:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.127 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.386 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:19.387 00:13:19.387 real 0m3.738s 00:13:19.387 user 0m11.384s 00:13:19.387 sys 0m1.496s 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:19.387 ************************************ 00:13:19.387 END TEST nvmf_bdevio_no_huge 00:13:19.387 ************************************ 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:19.387 ************************************ 00:13:19.387 START TEST nvmf_tls 00:13:19.387 ************************************ 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:19.387 * Looking for test storage... 
00:13:19.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:19.387 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:19.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.387 --rc genhtml_branch_coverage=1 00:13:19.387 --rc genhtml_function_coverage=1 00:13:19.387 --rc genhtml_legend=1 00:13:19.387 --rc geninfo_all_blocks=1 00:13:19.387 --rc geninfo_unexecuted_blocks=1 00:13:19.387 00:13:19.387 ' 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:19.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.387 --rc genhtml_branch_coverage=1 00:13:19.387 --rc genhtml_function_coverage=1 00:13:19.387 --rc genhtml_legend=1 00:13:19.387 --rc geninfo_all_blocks=1 00:13:19.387 --rc geninfo_unexecuted_blocks=1 00:13:19.387 00:13:19.387 ' 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:19.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.387 --rc genhtml_branch_coverage=1 00:13:19.387 --rc genhtml_function_coverage=1 00:13:19.387 --rc genhtml_legend=1 00:13:19.387 --rc geninfo_all_blocks=1 00:13:19.387 --rc geninfo_unexecuted_blocks=1 00:13:19.387 00:13:19.387 ' 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:19.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.387 --rc genhtml_branch_coverage=1 00:13:19.387 --rc genhtml_function_coverage=1 00:13:19.387 --rc genhtml_legend=1 00:13:19.387 --rc geninfo_all_blocks=1 00:13:19.387 --rc geninfo_unexecuted_blocks=1 00:13:19.387 00:13:19.387 ' 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.387 09:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.387 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.646 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:19.647 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:19.647 
09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@458 -- # nvmf_veth_init 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:19.647 Cannot find device "nvmf_init_br" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:19.647 Cannot find device "nvmf_init_br2" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:19.647 Cannot find device "nvmf_tgt_br" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:19.647 Cannot find device "nvmf_tgt_br2" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:19.647 Cannot find device "nvmf_init_br" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:19.647 Cannot find device "nvmf_init_br2" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:19.647 Cannot find device "nvmf_tgt_br" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:19.647 Cannot find device "nvmf_tgt_br2" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:19.647 Cannot find device "nvmf_br" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:19.647 Cannot find device "nvmf_init_if" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:19.647 Cannot find device "nvmf_init_if2" 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:19.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:19.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:19.647 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:19.648 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:19.648 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:19.648 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:19.907 09:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:19.907 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:19.907 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:13:19.907 00:13:19.907 --- 10.0.0.3 ping statistics --- 00:13:19.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.907 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:19.907 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:19.907 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:13:19.907 00:13:19.907 --- 10.0.0.4 ping statistics --- 00:13:19.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.907 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:19.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:19.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:19.907 00:13:19.907 --- 10.0.0.1 ping statistics --- 00:13:19.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.907 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:19.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:19.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:19.907 00:13:19.907 --- 10.0.0.2 ping statistics --- 00:13:19.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.907 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # return 0 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71701 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71701 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71701 ']' 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.907 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.908 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.908 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.908 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.908 [2024-10-08 09:19:11.562383] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:13:19.908 [2024-10-08 09:19:11.562712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.190 [2024-10-08 09:19:11.701309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.190 [2024-10-08 09:19:11.832465] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.190 [2024-10-08 09:19:11.832841] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.190 [2024-10-08 09:19:11.832958] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.190 [2024-10-08 09:19:11.833054] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.190 [2024-10-08 09:19:11.833132] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.190 [2024-10-08 09:19:11.833764] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.130 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.130 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:21.130 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:21.130 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:21.130 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.130 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.130 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:21.130 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:21.389 true 00:13:21.389 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:21.389 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:21.647 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:21.647 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:21.647 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:21.906 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:21.906 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:22.164 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:22.164 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:22.164 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:22.423 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:22.423 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:22.681 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:22.681 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:22.681 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:22.681 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:22.940 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:22.940 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:22.940 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:23.198 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:23.198 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:23.457 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:23.457 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:23.457 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:23.716 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:23.716 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:13:23.974 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ENSrzk2KNc 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.P6MqKGAApN 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ENSrzk2KNc 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.P6MqKGAApN 00:13:23.975 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:24.233 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:24.800 [2024-10-08 09:19:16.199091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:24.800 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ENSrzk2KNc 00:13:24.801 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ENSrzk2KNc 00:13:24.801 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:24.801 [2024-10-08 09:19:16.473252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.061 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:25.328 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:25.587 [2024-10-08 09:19:17.017490] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:25.587 [2024-10-08 09:19:17.017806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:25.587 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:25.846 malloc0 00:13:25.846 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:25.846 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ENSrzk2KNc 00:13:26.104 09:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:26.362 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ENSrzk2KNc 00:13:38.567 Initializing NVMe Controllers 00:13:38.567 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:38.567 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:38.567 Initialization complete. Launching workers. 00:13:38.567 ======================================================== 00:13:38.567 Latency(us) 00:13:38.567 Device Information : IOPS MiB/s Average min max 00:13:38.567 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11022.40 43.06 5807.56 931.78 8574.19 00:13:38.567 ======================================================== 00:13:38.567 Total : 11022.40 43.06 5807.56 931.78 8574.19 00:13:38.567 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ENSrzk2KNc 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ENSrzk2KNc 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71935 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71935 /var/tmp/bdevperf.sock 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71935 ']' 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
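Condensed, the setup_nvmf_tgt steps traced above reduce to a short RPC sequence: create the TCP transport, a subsystem backed by a malloc bdev, a TLS-enabled listener (-k), register the generated PSK file on the keyring as key0, and allow host1 to use that key; the initiator then runs spdk_nvme_perf against the listener with -S ssl and --psk-path pointing at the same file. Every name and flag below is taken from the trace; the grouping is only for readability:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.ENSrzk2KNc     # NVMeTLSkey-1:01:... written and chmod 0600 earlier

  "$rpc" nvmf_create_transport -t tcp -o
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  "$rpc" bdev_malloc_create 32 4096 -b malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  "$rpc" keyring_file_add_key key0 "$key"
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$key"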
00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.567 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.567 [2024-10-08 09:19:28.208113] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:13:38.567 [2024-10-08 09:19:28.208234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71935 ] 00:13:38.567 [2024-10-08 09:19:28.348462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.567 [2024-10-08 09:19:28.477434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.567 [2024-10-08 09:19:28.537321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:38.567 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.567 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:38.567 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ENSrzk2KNc 00:13:38.567 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:38.567 [2024-10-08 09:19:29.686594] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:38.567 TLSTESTn1 00:13:38.567 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:38.567 Running I/O for 10 seconds... 
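The run_bdevperf helper exercises the same listener from the host side: bdevperf is started idle (-z) with its own RPC socket, the PSK file is registered against that socket as key0, bdev_nvme_attach_controller is pointed at the TLS listener with --psk key0, and bdevperf.py perform_tests drives the verify workload that produces the IOPS samples below. Sketch with the arguments from the trace, paths abbreviated:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  ./build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

  "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.ENSrzk2KNc
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests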
00:13:40.439 4306.00 IOPS, 16.82 MiB/s [2024-10-08T09:19:33.057Z] 4359.00 IOPS, 17.03 MiB/s [2024-10-08T09:19:34.018Z] 4404.67 IOPS, 17.21 MiB/s [2024-10-08T09:19:34.954Z] 4451.75 IOPS, 17.39 MiB/s [2024-10-08T09:19:35.891Z] 4503.40 IOPS, 17.59 MiB/s [2024-10-08T09:19:37.268Z] 4509.33 IOPS, 17.61 MiB/s [2024-10-08T09:19:38.204Z] 4507.00 IOPS, 17.61 MiB/s [2024-10-08T09:19:39.144Z] 4509.62 IOPS, 17.62 MiB/s [2024-10-08T09:19:40.081Z] 4515.67 IOPS, 17.64 MiB/s [2024-10-08T09:19:40.081Z] 4519.90 IOPS, 17.66 MiB/s 00:13:48.398 Latency(us) 00:13:48.398 [2024-10-08T09:19:40.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.398 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:48.398 Verification LBA range: start 0x0 length 0x2000 00:13:48.398 TLSTESTn1 : 10.01 4525.97 17.68 0.00 0.00 28232.72 4527.94 22758.87 00:13:48.398 [2024-10-08T09:19:40.081Z] =================================================================================================================== 00:13:48.398 [2024-10-08T09:19:40.081Z] Total : 4525.97 17.68 0.00 0.00 28232.72 4527.94 22758.87 00:13:48.398 { 00:13:48.398 "results": [ 00:13:48.398 { 00:13:48.398 "job": "TLSTESTn1", 00:13:48.398 "core_mask": "0x4", 00:13:48.398 "workload": "verify", 00:13:48.398 "status": "finished", 00:13:48.398 "verify_range": { 00:13:48.398 "start": 0, 00:13:48.398 "length": 8192 00:13:48.398 }, 00:13:48.398 "queue_depth": 128, 00:13:48.398 "io_size": 4096, 00:13:48.398 "runtime": 10.014641, 00:13:48.398 "iops": 4525.973522166197, 00:13:48.398 "mibps": 17.679584070961706, 00:13:48.398 "io_failed": 0, 00:13:48.398 "io_timeout": 0, 00:13:48.398 "avg_latency_us": 28232.718692943643, 00:13:48.398 "min_latency_us": 4527.941818181818, 00:13:48.398 "max_latency_us": 22758.865454545456 00:13:48.398 } 00:13:48.398 ], 00:13:48.398 "core_count": 1 00:13:48.398 } 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71935 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71935 ']' 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71935 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71935 00:13:48.398 killing process with pid 71935 00:13:48.398 Received shutdown signal, test time was about 10.000000 seconds 00:13:48.398 00:13:48.398 Latency(us) 00:13:48.398 [2024-10-08T09:19:40.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.398 [2024-10-08T09:19:40.081Z] =================================================================================================================== 00:13:48.398 [2024-10-08T09:19:40.081Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71935' 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71935 00:13:48.398 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71935 00:13:48.657 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P6MqKGAApN 00:13:48.657 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P6MqKGAApN 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P6MqKGAApN 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.P6MqKGAApN 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72075 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72075 /var/tmp/bdevperf.sock 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72075 ']' 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.658 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.658 [2024-10-08 09:19:40.241134] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:13:48.658 [2024-10-08 09:19:40.242010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72075 ] 00:13:48.917 [2024-10-08 09:19:40.373611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.917 [2024-10-08 09:19:40.480349] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.917 [2024-10-08 09:19:40.537689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:49.853 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.853 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:49.853 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P6MqKGAApN 00:13:49.853 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:50.112 [2024-10-08 09:19:41.703918] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:50.112 [2024-10-08 09:19:41.714179] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:50.112 [2024-10-08 09:19:41.714734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6b090 (107): Transport endpoint is not connected 00:13:50.112 [2024-10-08 09:19:41.715738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6b090 (9): Bad file descriptor 00:13:50.112 [2024-10-08 09:19:41.716718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:50.112 [2024-10-08 09:19:41.716772] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:50.112 [2024-10-08 09:19:41.716784] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:50.112 [2024-10-08 09:19:41.716794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:50.112 request: 00:13:50.112 { 00:13:50.112 "name": "TLSTEST", 00:13:50.112 "trtype": "tcp", 00:13:50.112 "traddr": "10.0.0.3", 00:13:50.112 "adrfam": "ipv4", 00:13:50.112 "trsvcid": "4420", 00:13:50.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:50.112 "prchk_reftag": false, 00:13:50.112 "prchk_guard": false, 00:13:50.112 "hdgst": false, 00:13:50.112 "ddgst": false, 00:13:50.112 "psk": "key0", 00:13:50.112 "allow_unrecognized_csi": false, 00:13:50.112 "method": "bdev_nvme_attach_controller", 00:13:50.112 "req_id": 1 00:13:50.112 } 00:13:50.112 Got JSON-RPC error response 00:13:50.112 response: 00:13:50.112 { 00:13:50.112 "code": -5, 00:13:50.112 "message": "Input/output error" 00:13:50.112 } 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72075 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72075 ']' 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72075 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72075 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:50.112 killing process with pid 72075 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72075' 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72075 00:13:50.112 Received shutdown signal, test time was about 10.000000 seconds 00:13:50.112 00:13:50.112 Latency(us) 00:13:50.112 [2024-10-08T09:19:41.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.112 [2024-10-08T09:19:41.795Z] =================================================================================================================== 00:13:50.112 [2024-10-08T09:19:41.795Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:50.112 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72075 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ENSrzk2KNc 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ENSrzk2KNc 
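target/tls.sh@147 above is the first negative case: the controller is attached with /tmp/tmp.P6MqKGAApN (key_2), a key that was never registered for host1 on cnode1, so the TLS handshake never completes and bdev_nvme_attach_controller surfaces the -5 Input/output error shown in the JSON-RPC response. The NOT wrapper around run_bdevperf inverts the exit status, so the case passes only when the attach fails; its shape is roughly the following (an assumed definition for illustration, the real helper lives in autotest_common.sh):

  NOT() {
      # succeed if and only if the wrapped command fails
      ! "$@"
  }

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P6MqKGAApN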
00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ENSrzk2KNc 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ENSrzk2KNc 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72104 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72104 /var/tmp/bdevperf.sock 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72104 ']' 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.372 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.372 [2024-10-08 09:19:42.034976] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:13:50.372 [2024-10-08 09:19:42.035686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72104 ] 00:13:50.631 [2024-10-08 09:19:42.177908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.631 [2024-10-08 09:19:42.261197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.890 [2024-10-08 09:19:42.318287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.457 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.457 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:51.457 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ENSrzk2KNc 00:13:51.715 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:51.974 [2024-10-08 09:19:43.523491] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.974 [2024-10-08 09:19:43.534302] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:51.974 [2024-10-08 09:19:43.534357] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:51.974 [2024-10-08 09:19:43.534402] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:51.974 [2024-10-08 09:19:43.535198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65c090 (107): Transport endpoint is not connected 00:13:51.974 [2024-10-08 09:19:43.536185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65c090 (9): Bad file descriptor 00:13:51.974 [2024-10-08 09:19:43.537181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:51.974 [2024-10-08 09:19:43.537224] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:51.974 [2024-10-08 09:19:43.537249] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:51.974 [2024-10-08 09:19:43.537261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
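The target-side error above also shows how the PSK lookup is keyed: the identity string printed by tcp_sock_get_key combines a fixed NVMe0R01 prefix with the host NQN and the subsystem NQN, so a key registered for host1 on cnode1 is simply not found when host2 connects, even though the key file itself is the valid /tmp/tmp.ENSrzk2KNc. Reconstructing the identity from the error text (illustrative only):

  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  identity="NVMe0R01 ${hostnqn} ${subnqn}"
  # matches the string in the error: 'Could not find PSK for identity: NVMe0R01 ... host2 ... cnode1'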
00:13:51.974 request: 00:13:51.974 { 00:13:51.974 "name": "TLSTEST", 00:13:51.974 "trtype": "tcp", 00:13:51.974 "traddr": "10.0.0.3", 00:13:51.974 "adrfam": "ipv4", 00:13:51.974 "trsvcid": "4420", 00:13:51.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.974 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:51.974 "prchk_reftag": false, 00:13:51.974 "prchk_guard": false, 00:13:51.974 "hdgst": false, 00:13:51.974 "ddgst": false, 00:13:51.974 "psk": "key0", 00:13:51.974 "allow_unrecognized_csi": false, 00:13:51.974 "method": "bdev_nvme_attach_controller", 00:13:51.974 "req_id": 1 00:13:51.974 } 00:13:51.974 Got JSON-RPC error response 00:13:51.974 response: 00:13:51.974 { 00:13:51.974 "code": -5, 00:13:51.974 "message": "Input/output error" 00:13:51.974 } 00:13:51.974 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72104 00:13:51.974 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72104 ']' 00:13:51.974 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72104 00:13:51.974 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:51.974 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.974 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72104 00:13:51.974 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:51.975 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:51.975 killing process with pid 72104 00:13:51.975 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72104' 00:13:51.975 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72104 00:13:51.975 Received shutdown signal, test time was about 10.000000 seconds 00:13:51.975 00:13:51.975 Latency(us) 00:13:51.975 [2024-10-08T09:19:43.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.975 [2024-10-08T09:19:43.658Z] =================================================================================================================== 00:13:51.975 [2024-10-08T09:19:43.658Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:51.975 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72104 00:13:52.233 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:52.233 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:52.233 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.233 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.233 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.233 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ENSrzk2KNc 00:13:52.233 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:52.233 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ENSrzk2KNc 
00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ENSrzk2KNc 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ENSrzk2KNc 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72138 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72138 /var/tmp/bdevperf.sock 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72138 ']' 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.234 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.234 [2024-10-08 09:19:43.848956] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:13:52.234 [2024-10-08 09:19:43.849490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72138 ] 00:13:52.493 [2024-10-08 09:19:43.983774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.493 [2024-10-08 09:19:44.079464] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.493 [2024-10-08 09:19:44.132952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:52.751 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.751 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:52.751 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ENSrzk2KNc 00:13:53.008 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:53.008 [2024-10-08 09:19:44.680456] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:53.008 [2024-10-08 09:19:44.685706] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:53.008 [2024-10-08 09:19:44.685792] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:53.008 [2024-10-08 09:19:44.685855] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:53.008 [2024-10-08 09:19:44.686453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2080090 (107): Transport endpoint is not connected 00:13:53.008 [2024-10-08 09:19:44.687435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2080090 (9): Bad file descriptor 00:13:53.008 [2024-10-08 09:19:44.688437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:53.008 [2024-10-08 09:19:44.688483] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:53.008 [2024-10-08 09:19:44.688510] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:53.008 [2024-10-08 09:19:44.688522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:53.267 request: 00:13:53.267 { 00:13:53.267 "name": "TLSTEST", 00:13:53.267 "trtype": "tcp", 00:13:53.267 "traddr": "10.0.0.3", 00:13:53.267 "adrfam": "ipv4", 00:13:53.267 "trsvcid": "4420", 00:13:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:53.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.267 "prchk_reftag": false, 00:13:53.267 "prchk_guard": false, 00:13:53.267 "hdgst": false, 00:13:53.267 "ddgst": false, 00:13:53.267 "psk": "key0", 00:13:53.267 "allow_unrecognized_csi": false, 00:13:53.267 "method": "bdev_nvme_attach_controller", 00:13:53.267 "req_id": 1 00:13:53.267 } 00:13:53.267 Got JSON-RPC error response 00:13:53.267 response: 00:13:53.267 { 00:13:53.267 "code": -5, 00:13:53.267 "message": "Input/output error" 00:13:53.267 } 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72138 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72138 ']' 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72138 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72138 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:53.267 killing process with pid 72138 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72138' 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72138 00:13:53.267 Received shutdown signal, test time was about 10.000000 seconds 00:13:53.267 00:13:53.267 Latency(us) 00:13:53.267 [2024-10-08T09:19:44.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.267 [2024-10-08T09:19:44.950Z] =================================================================================================================== 00:13:53.267 [2024-10-08T09:19:44.950Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:53.267 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72138 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:53.527 09:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72159 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72159 /var/tmp/bdevperf.sock 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72159 ']' 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.527 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.527 [2024-10-08 09:19:45.016277] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:13:53.527 [2024-10-08 09:19:45.017083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72159 ] 00:13:53.527 [2024-10-08 09:19:45.155456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.786 [2024-10-08 09:19:45.240805] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.786 [2024-10-08 09:19:45.296302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.354 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.354 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:54.354 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:54.613 [2024-10-08 09:19:46.138517] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:54.613 [2024-10-08 09:19:46.139021] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:54.613 request: 00:13:54.613 { 00:13:54.613 "name": "key0", 00:13:54.613 "path": "", 00:13:54.613 "method": "keyring_file_add_key", 00:13:54.613 "req_id": 1 00:13:54.613 } 00:13:54.613 Got JSON-RPC error response 00:13:54.613 response: 00:13:54.613 { 00:13:54.613 "code": -1, 00:13:54.613 "message": "Operation not permitted" 00:13:54.613 } 00:13:54.613 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:54.873 [2024-10-08 09:19:46.418766] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:54.873 [2024-10-08 09:19:46.419530] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:54.873 request: 00:13:54.873 { 00:13:54.873 "name": "TLSTEST", 00:13:54.873 "trtype": "tcp", 00:13:54.873 "traddr": "10.0.0.3", 00:13:54.873 "adrfam": "ipv4", 00:13:54.873 "trsvcid": "4420", 00:13:54.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.873 "prchk_reftag": false, 00:13:54.873 "prchk_guard": false, 00:13:54.873 "hdgst": false, 00:13:54.873 "ddgst": false, 00:13:54.873 "psk": "key0", 00:13:54.873 "allow_unrecognized_csi": false, 00:13:54.873 "method": "bdev_nvme_attach_controller", 00:13:54.873 "req_id": 1 00:13:54.873 } 00:13:54.873 Got JSON-RPC error response 00:13:54.873 response: 00:13:54.873 { 00:13:54.873 "code": -126, 00:13:54.873 "message": "Required key not available" 00:13:54.873 } 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72159 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72159 ']' 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72159 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.873 09:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72159 00:13:54.873 killing process with pid 72159 00:13:54.873 Received shutdown signal, test time was about 10.000000 seconds 00:13:54.873 00:13:54.873 Latency(us) 00:13:54.873 [2024-10-08T09:19:46.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.873 [2024-10-08T09:19:46.556Z] =================================================================================================================== 00:13:54.873 [2024-10-08T09:19:46.556Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72159' 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72159 00:13:54.873 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72159 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71701 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71701 ']' 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71701 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71701 00:13:55.133 killing process with pid 71701 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71701' 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71701 00:13:55.133 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71701 00:13:55.393 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:55.393 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:55.393 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:55.393 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 
-- # prefix=NVMeTLSkey-1 00:13:55.393 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:55.393 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:13:55.393 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.cV4vmCYsGW 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.cV4vmCYsGW 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72203 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72203 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72203 ']' 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.393 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.653 [2024-10-08 09:19:47.100823] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:13:55.653 [2024-10-08 09:19:47.100908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.653 [2024-10-08 09:19:47.225919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.653 [2024-10-08 09:19:47.320616] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.653 [2024-10-08 09:19:47.320966] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
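For reference, the key handling traced above condenses to the following sketch (values and paths are the ones from this run; the standalone shell form is illustrative only, since the real work happens inside format_interchange_psk and the tls.sh helpers):

key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'   # output of format_interchange_psk <hex key> 2; the digest selector 2 shows up as the '02' field
key_long_path=$(mktemp)                    # /tmp/tmp.cV4vmCYsGW in this run
echo -n "$key_long" > "$key_long_path"     # redirection is implied by target/tls.sh@162; xtrace does not show it
chmod 0600 "$key_long_path"                # restrictive mode is required later by keyring_file_add_key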
00:13:55.653 [2024-10-08 09:19:47.321110] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.653 [2024-10-08 09:19:47.321236] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.653 [2024-10-08 09:19:47.321270] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.653 [2024-10-08 09:19:47.321761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.913 [2024-10-08 09:19:47.376365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:55.913 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:55.913 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:55.913 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:55.913 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:55.913 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.913 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.913 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.cV4vmCYsGW 00:13:55.913 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cV4vmCYsGW 00:13:55.913 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:56.172 [2024-10-08 09:19:47.701184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.172 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:56.432 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:56.692 [2024-10-08 09:19:48.297350] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:56.692 [2024-10-08 09:19:48.297543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:56.692 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:56.959 malloc0 00:13:56.959 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:57.235 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW 00:13:57.495 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cV4vmCYsGW 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
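Stripped of the xtrace prefixes, the target-side TLS setup that setup_nvmf_tgt just performed is this rpc.py sequence (same arguments as traced above; shown here only as a condensed sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k requests the secure (TLS) channel, hence the 'TLS support is considered experimental' notice
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0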
00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cV4vmCYsGW 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72251 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72251 /var/tmp/bdevperf.sock 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72251 ']' 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.755 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.755 [2024-10-08 09:19:49.276574] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
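The initiator side is driven entirely over the bdevperf RPC socket; condensed, the run traced below amounts to the following (paths and arguments as in this run; backgrounding is handled by the helper, so treat this as a sketch):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &   # -z: idle until told to run
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # kicks off the 10-second verify workload seen below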
00:13:57.755 [2024-10-08 09:19:49.276689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72251 ] 00:13:57.755 [2024-10-08 09:19:49.413980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.015 [2024-10-08 09:19:49.540607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.015 [2024-10-08 09:19:49.599653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.951 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.951 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:58.951 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW 00:13:58.951 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:59.210 [2024-10-08 09:19:50.705604] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:59.210 TLSTESTn1 00:13:59.210 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:59.210 Running I/O for 10 seconds... 00:14:01.524 4608.00 IOPS, 18.00 MiB/s [2024-10-08T09:19:54.144Z] 4672.00 IOPS, 18.25 MiB/s [2024-10-08T09:19:55.083Z] 4665.67 IOPS, 18.23 MiB/s [2024-10-08T09:19:56.020Z] 4690.00 IOPS, 18.32 MiB/s [2024-10-08T09:19:56.955Z] 4708.80 IOPS, 18.39 MiB/s [2024-10-08T09:19:57.999Z] 4752.67 IOPS, 18.57 MiB/s [2024-10-08T09:19:58.932Z] 4766.29 IOPS, 18.62 MiB/s [2024-10-08T09:20:00.308Z] 4782.62 IOPS, 18.68 MiB/s [2024-10-08T09:20:01.244Z] 4794.56 IOPS, 18.73 MiB/s [2024-10-08T09:20:01.244Z] 4799.70 IOPS, 18.75 MiB/s 00:14:09.561 Latency(us) 00:14:09.561 [2024-10-08T09:20:01.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.561 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:09.561 Verification LBA range: start 0x0 length 0x2000 00:14:09.561 TLSTESTn1 : 10.01 4805.99 18.77 0.00 0.00 26589.14 4259.84 22401.40 00:14:09.561 [2024-10-08T09:20:01.244Z] =================================================================================================================== 00:14:09.561 [2024-10-08T09:20:01.244Z] Total : 4805.99 18.77 0.00 0.00 26589.14 4259.84 22401.40 00:14:09.561 { 00:14:09.561 "results": [ 00:14:09.561 { 00:14:09.561 "job": "TLSTESTn1", 00:14:09.561 "core_mask": "0x4", 00:14:09.561 "workload": "verify", 00:14:09.561 "status": "finished", 00:14:09.561 "verify_range": { 00:14:09.561 "start": 0, 00:14:09.561 "length": 8192 00:14:09.561 }, 00:14:09.561 "queue_depth": 128, 00:14:09.561 "io_size": 4096, 00:14:09.561 "runtime": 10.013345, 00:14:09.561 "iops": 4805.986411134341, 00:14:09.561 "mibps": 18.77338441849352, 00:14:09.561 "io_failed": 0, 00:14:09.561 "io_timeout": 0, 00:14:09.561 "avg_latency_us": 26589.142480108207, 00:14:09.561 "min_latency_us": 4259.84, 00:14:09.561 "max_latency_us": 
22401.396363636362 00:14:09.561 } 00:14:09.561 ], 00:14:09.561 "core_count": 1 00:14:09.561 } 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72251 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72251 ']' 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72251 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72251 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:09.561 killing process with pid 72251 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72251' 00:14:09.561 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72251 00:14:09.561 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.561 00:14:09.561 Latency(us) 00:14:09.561 [2024-10-08T09:20:01.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.562 [2024-10-08T09:20:01.245Z] =================================================================================================================== 00:14:09.562 [2024-10-08T09:20:01.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.562 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72251 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.cV4vmCYsGW 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cV4vmCYsGW 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cV4vmCYsGW 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cV4vmCYsGW 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 
-- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cV4vmCYsGW 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72387 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72387 /var/tmp/bdevperf.sock 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72387 ']' 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.562 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.562 [2024-10-08 09:20:01.243344] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
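This second bdevperf instance is the negative case: target/tls.sh@171 above made the key file world-readable, and NOT run_bdevperf expects the attach to fail. Condensed (same calls as in the trace that follows; error strings quoted from the JSON-RPC responses there):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
chmod 0666 /tmp/tmp.cV4vmCYsGW
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW   # rejected: Invalid permissions (0100666) -> 'Operation not permitted'
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # fails: 'Could not load PSK: key0' -> 'Required key not available' (-126)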
00:14:09.562 [2024-10-08 09:20:01.244200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72387 ] 00:14:09.821 [2024-10-08 09:20:01.380079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.821 [2024-10-08 09:20:01.464839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.079 [2024-10-08 09:20:01.522258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.646 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.646 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:10.646 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW 00:14:10.905 [2024-10-08 09:20:02.413988] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cV4vmCYsGW': 0100666 00:14:10.905 [2024-10-08 09:20:02.414044] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:10.905 request: 00:14:10.905 { 00:14:10.905 "name": "key0", 00:14:10.905 "path": "/tmp/tmp.cV4vmCYsGW", 00:14:10.905 "method": "keyring_file_add_key", 00:14:10.905 "req_id": 1 00:14:10.905 } 00:14:10.905 Got JSON-RPC error response 00:14:10.905 response: 00:14:10.905 { 00:14:10.905 "code": -1, 00:14:10.905 "message": "Operation not permitted" 00:14:10.905 } 00:14:10.905 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:11.165 [2024-10-08 09:20:02.654190] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.165 [2024-10-08 09:20:02.654295] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:11.165 request: 00:14:11.165 { 00:14:11.165 "name": "TLSTEST", 00:14:11.165 "trtype": "tcp", 00:14:11.165 "traddr": "10.0.0.3", 00:14:11.165 "adrfam": "ipv4", 00:14:11.165 "trsvcid": "4420", 00:14:11.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.165 "prchk_reftag": false, 00:14:11.165 "prchk_guard": false, 00:14:11.165 "hdgst": false, 00:14:11.165 "ddgst": false, 00:14:11.165 "psk": "key0", 00:14:11.165 "allow_unrecognized_csi": false, 00:14:11.165 "method": "bdev_nvme_attach_controller", 00:14:11.165 "req_id": 1 00:14:11.165 } 00:14:11.165 Got JSON-RPC error response 00:14:11.165 response: 00:14:11.165 { 00:14:11.165 "code": -126, 00:14:11.165 "message": "Required key not available" 00:14:11.165 } 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72387 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72387 ']' 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72387 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72387 00:14:11.165 killing process with pid 72387 00:14:11.165 Received shutdown signal, test time was about 10.000000 seconds 00:14:11.165 00:14:11.165 Latency(us) 00:14:11.165 [2024-10-08T09:20:02.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.165 [2024-10-08T09:20:02.848Z] =================================================================================================================== 00:14:11.165 [2024-10-08T09:20:02.848Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72387' 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72387 00:14:11.165 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72387 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72203 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72203 ']' 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72203 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72203 00:14:11.424 killing process with pid 72203 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72203' 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72203 00:14:11.424 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72203 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:11.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72426 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72426 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72426 ']' 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.683 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.683 [2024-10-08 09:20:03.252711] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:11.683 [2024-10-08 09:20:03.252833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.942 [2024-10-08 09:20:03.381476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.942 [2024-10-08 09:20:03.470071] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.942 [2024-10-08 09:20:03.470150] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.942 [2024-10-08 09:20:03.470177] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.942 [2024-10-08 09:20:03.470185] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.942 [2024-10-08 09:20:03.470192] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
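Each target instance in this file is brought up the same way; the nvmfappstart -m 0x2 just traced expands to roughly this (backgrounding and PID capture happen inside nvmf/common.sh, so this is only a sketch):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!      # 72426 for this instance
# waitforlisten then polls /var/tmp/spdk.sock until the target answers RPCs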
00:14:11.942 [2024-10-08 09:20:03.470631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.942 [2024-10-08 09:20:03.523580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.cV4vmCYsGW 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.cV4vmCYsGW 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.cV4vmCYsGW 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cV4vmCYsGW 00:14:12.876 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:13.136 [2024-10-08 09:20:04.565039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.136 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:13.394 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:13.653 [2024-10-08 09:20:05.097191] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:13.653 [2024-10-08 09:20:05.097490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:13.653 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:13.915 malloc0 00:14:13.915 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:13.915 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW 00:14:14.175 
[2024-10-08 09:20:05.801051] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cV4vmCYsGW': 0100666 00:14:14.175 [2024-10-08 09:20:05.801097] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:14.175 request: 00:14:14.175 { 00:14:14.175 "name": "key0", 00:14:14.175 "path": "/tmp/tmp.cV4vmCYsGW", 00:14:14.175 "method": "keyring_file_add_key", 00:14:14.175 "req_id": 1 00:14:14.175 } 00:14:14.175 Got JSON-RPC error response 00:14:14.175 response: 00:14:14.175 { 00:14:14.175 "code": -1, 00:14:14.175 "message": "Operation not permitted" 00:14:14.175 } 00:14:14.175 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:14.435 [2024-10-08 09:20:06.017106] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:14.435 [2024-10-08 09:20:06.017217] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:14.435 request: 00:14:14.435 { 00:14:14.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.435 "host": "nqn.2016-06.io.spdk:host1", 00:14:14.435 "psk": "key0", 00:14:14.435 "method": "nvmf_subsystem_add_host", 00:14:14.435 "req_id": 1 00:14:14.435 } 00:14:14.435 Got JSON-RPC error response 00:14:14.435 response: 00:14:14.435 { 00:14:14.435 "code": -32603, 00:14:14.435 "message": "Internal error" 00:14:14.435 } 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72426 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72426 ']' 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72426 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72426 00:14:14.435 killing process with pid 72426 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72426' 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72426 00:14:14.435 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72426 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.cV4vmCYsGW 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72495 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72495 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72495 ']' 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.694 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.952 [2024-10-08 09:20:06.380323] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:14.952 [2024-10-08 09:20:06.380445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.952 [2024-10-08 09:20:06.514577] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.952 [2024-10-08 09:20:06.604753] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.952 [2024-10-08 09:20:06.604835] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.952 [2024-10-08 09:20:06.604846] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.952 [2024-10-08 09:20:06.604853] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.952 [2024-10-08 09:20:06.604859] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
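Before this third target instance, the trace above exercised the target-side negative path and then restored the key; condensed (error strings quoted from the responses above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW   # with mode 0666 still set: 'Invalid permissions ... 0100666' -> 'Operation not permitted'
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # "Key 'key0' does not exist" -> -32603 'Internal error'
chmod 0600 /tmp/tmp.cV4vmCYsGW                       # target/tls.sh@182: restore the mode so the setup below can load the key again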
00:14:14.952 [2024-10-08 09:20:06.605215] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.221 [2024-10-08 09:20:06.657059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.801 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.801 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:15.801 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:15.801 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.801 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.801 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.801 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.cV4vmCYsGW 00:14:15.801 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cV4vmCYsGW 00:14:15.801 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:16.060 [2024-10-08 09:20:07.598324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.060 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:16.319 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:16.577 [2024-10-08 09:20:08.046429] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:16.577 [2024-10-08 09:20:08.046665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:16.577 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:16.838 malloc0 00:14:16.838 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:17.098 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW 00:14:17.357 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72545 00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72545 /var/tmp/bdevperf.sock 00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72545 ']' 
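Once the new bdevperf instance below attaches TLSTESTn1 with the same key, the test snapshots the live configuration on both ends; the two large JSON dumps further down are the output of these calls (the script keeps them in the tgtconf and bdevperfconf shell variables, rendered here with command substitution as a sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
tgtconf=$($rpc save_config)                                   # target side: includes key0, the TLS listener ("secure_channel": true) and the host entry added with --psk
bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config)    # initiator side: includes bdev_nvme_attach_controller with "psk": "key0"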
00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.616 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.616 [2024-10-08 09:20:09.187895] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:17.616 [2024-10-08 09:20:09.188006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72545 ] 00:14:17.874 [2024-10-08 09:20:09.327720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.874 [2024-10-08 09:20:09.430028] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.874 [2024-10-08 09:20:09.486181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.441 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.441 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:18.441 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW 00:14:18.700 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:18.958 [2024-10-08 09:20:10.562649] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.958 TLSTESTn1 00:14:19.217 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:19.475 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:19.476 "subsystems": [ 00:14:19.476 { 00:14:19.476 "subsystem": "keyring", 00:14:19.476 "config": [ 00:14:19.476 { 00:14:19.476 "method": "keyring_file_add_key", 00:14:19.476 "params": { 00:14:19.476 "name": "key0", 00:14:19.476 "path": "/tmp/tmp.cV4vmCYsGW" 00:14:19.476 } 00:14:19.476 } 00:14:19.476 ] 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "subsystem": "iobuf", 00:14:19.476 "config": [ 00:14:19.476 { 00:14:19.476 "method": "iobuf_set_options", 00:14:19.476 "params": { 00:14:19.476 "small_pool_count": 8192, 00:14:19.476 "large_pool_count": 1024, 00:14:19.476 "small_bufsize": 8192, 00:14:19.476 "large_bufsize": 135168 00:14:19.476 } 00:14:19.476 } 00:14:19.476 ] 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "subsystem": "sock", 00:14:19.476 "config": [ 00:14:19.476 { 00:14:19.476 "method": "sock_set_default_impl", 00:14:19.476 "params": { 00:14:19.476 "impl_name": "uring" 00:14:19.476 
} 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "method": "sock_impl_set_options", 00:14:19.476 "params": { 00:14:19.476 "impl_name": "ssl", 00:14:19.476 "recv_buf_size": 4096, 00:14:19.476 "send_buf_size": 4096, 00:14:19.476 "enable_recv_pipe": true, 00:14:19.476 "enable_quickack": false, 00:14:19.476 "enable_placement_id": 0, 00:14:19.476 "enable_zerocopy_send_server": true, 00:14:19.476 "enable_zerocopy_send_client": false, 00:14:19.476 "zerocopy_threshold": 0, 00:14:19.476 "tls_version": 0, 00:14:19.476 "enable_ktls": false 00:14:19.476 } 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "method": "sock_impl_set_options", 00:14:19.476 "params": { 00:14:19.476 "impl_name": "posix", 00:14:19.476 "recv_buf_size": 2097152, 00:14:19.476 "send_buf_size": 2097152, 00:14:19.476 "enable_recv_pipe": true, 00:14:19.476 "enable_quickack": false, 00:14:19.476 "enable_placement_id": 0, 00:14:19.476 "enable_zerocopy_send_server": true, 00:14:19.476 "enable_zerocopy_send_client": false, 00:14:19.476 "zerocopy_threshold": 0, 00:14:19.476 "tls_version": 0, 00:14:19.476 "enable_ktls": false 00:14:19.476 } 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "method": "sock_impl_set_options", 00:14:19.476 "params": { 00:14:19.476 "impl_name": "uring", 00:14:19.476 "recv_buf_size": 2097152, 00:14:19.476 "send_buf_size": 2097152, 00:14:19.476 "enable_recv_pipe": true, 00:14:19.476 "enable_quickack": false, 00:14:19.476 "enable_placement_id": 0, 00:14:19.476 "enable_zerocopy_send_server": false, 00:14:19.476 "enable_zerocopy_send_client": false, 00:14:19.476 "zerocopy_threshold": 0, 00:14:19.476 "tls_version": 0, 00:14:19.476 "enable_ktls": false 00:14:19.476 } 00:14:19.476 } 00:14:19.476 ] 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "subsystem": "vmd", 00:14:19.476 "config": [] 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "subsystem": "accel", 00:14:19.476 "config": [ 00:14:19.476 { 00:14:19.476 "method": "accel_set_options", 00:14:19.476 "params": { 00:14:19.476 "small_cache_size": 128, 00:14:19.476 "large_cache_size": 16, 00:14:19.476 "task_count": 2048, 00:14:19.476 "sequence_count": 2048, 00:14:19.476 "buf_count": 2048 00:14:19.476 } 00:14:19.476 } 00:14:19.476 ] 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "subsystem": "bdev", 00:14:19.476 "config": [ 00:14:19.476 { 00:14:19.476 "method": "bdev_set_options", 00:14:19.476 "params": { 00:14:19.476 "bdev_io_pool_size": 65535, 00:14:19.476 "bdev_io_cache_size": 256, 00:14:19.476 "bdev_auto_examine": true, 00:14:19.476 "iobuf_small_cache_size": 128, 00:14:19.476 "iobuf_large_cache_size": 16 00:14:19.476 } 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "method": "bdev_raid_set_options", 00:14:19.476 "params": { 00:14:19.476 "process_window_size_kb": 1024, 00:14:19.476 "process_max_bandwidth_mb_sec": 0 00:14:19.476 } 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "method": "bdev_iscsi_set_options", 00:14:19.476 "params": { 00:14:19.476 "timeout_sec": 30 00:14:19.476 } 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "method": "bdev_nvme_set_options", 00:14:19.476 "params": { 00:14:19.476 "action_on_timeout": "none", 00:14:19.476 "timeout_us": 0, 00:14:19.476 "timeout_admin_us": 0, 00:14:19.476 "keep_alive_timeout_ms": 10000, 00:14:19.476 "arbitration_burst": 0, 00:14:19.476 "low_priority_weight": 0, 00:14:19.476 "medium_priority_weight": 0, 00:14:19.476 "high_priority_weight": 0, 00:14:19.476 "nvme_adminq_poll_period_us": 10000, 00:14:19.476 "nvme_ioq_poll_period_us": 0, 00:14:19.476 "io_queue_requests": 0, 00:14:19.476 "delay_cmd_submit": true, 00:14:19.476 "transport_retry_count": 4, 
00:14:19.476 "bdev_retry_count": 3, 00:14:19.476 "transport_ack_timeout": 0, 00:14:19.476 "ctrlr_loss_timeout_sec": 0, 00:14:19.476 "reconnect_delay_sec": 0, 00:14:19.476 "fast_io_fail_timeout_sec": 0, 00:14:19.476 "disable_auto_failback": false, 00:14:19.476 "generate_uuids": false, 00:14:19.476 "transport_tos": 0, 00:14:19.476 "nvme_error_stat": false, 00:14:19.476 "rdma_srq_size": 0, 00:14:19.476 "io_path_stat": false, 00:14:19.476 "allow_accel_sequence": false, 00:14:19.476 "rdma_max_cq_size": 0, 00:14:19.476 "rdma_cm_event_timeout_ms": 0, 00:14:19.476 "dhchap_digests": [ 00:14:19.476 "sha256", 00:14:19.476 "sha384", 00:14:19.476 "sha512" 00:14:19.476 ], 00:14:19.476 "dhchap_dhgroups": [ 00:14:19.476 "null", 00:14:19.476 "ffdhe2048", 00:14:19.476 "ffdhe3072", 00:14:19.476 "ffdhe4096", 00:14:19.476 "ffdhe6144", 00:14:19.476 "ffdhe8192" 00:14:19.476 ] 00:14:19.476 } 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "method": "bdev_nvme_set_hotplug", 00:14:19.476 "params": { 00:14:19.476 "period_us": 100000, 00:14:19.476 "enable": false 00:14:19.476 } 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "method": "bdev_malloc_create", 00:14:19.476 "params": { 00:14:19.476 "name": "malloc0", 00:14:19.476 "num_blocks": 8192, 00:14:19.476 "block_size": 4096, 00:14:19.476 "physical_block_size": 4096, 00:14:19.476 "uuid": "c8ed6d15-b5a7-4431-94b5-f469189d3af6", 00:14:19.476 "optimal_io_boundary": 0, 00:14:19.476 "md_size": 0, 00:14:19.476 "dif_type": 0, 00:14:19.476 "dif_is_head_of_md": false, 00:14:19.476 "dif_pi_format": 0 00:14:19.476 } 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "method": "bdev_wait_for_examine" 00:14:19.476 } 00:14:19.476 ] 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "subsystem": "nbd", 00:14:19.476 "config": [] 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "subsystem": "scheduler", 00:14:19.476 "config": [ 00:14:19.476 { 00:14:19.476 "method": "framework_set_scheduler", 00:14:19.476 "params": { 00:14:19.476 "name": "static" 00:14:19.476 } 00:14:19.476 } 00:14:19.476 ] 00:14:19.476 }, 00:14:19.476 { 00:14:19.476 "subsystem": "nvmf", 00:14:19.476 "config": [ 00:14:19.476 { 00:14:19.477 "method": "nvmf_set_config", 00:14:19.477 "params": { 00:14:19.477 "discovery_filter": "match_any", 00:14:19.477 "admin_cmd_passthru": { 00:14:19.477 "identify_ctrlr": false 00:14:19.477 }, 00:14:19.477 "dhchap_digests": [ 00:14:19.477 "sha256", 00:14:19.477 "sha384", 00:14:19.477 "sha512" 00:14:19.477 ], 00:14:19.477 "dhchap_dhgroups": [ 00:14:19.477 "null", 00:14:19.477 "ffdhe2048", 00:14:19.477 "ffdhe3072", 00:14:19.477 "ffdhe4096", 00:14:19.477 "ffdhe6144", 00:14:19.477 "ffdhe8192" 00:14:19.477 ] 00:14:19.477 } 00:14:19.477 }, 00:14:19.477 { 00:14:19.477 "method": "nvmf_set_max_subsystems", 00:14:19.477 "params": { 00:14:19.477 "max_subsystems": 1024 00:14:19.477 } 00:14:19.477 }, 00:14:19.477 { 00:14:19.477 "method": "nvmf_set_crdt", 00:14:19.477 "params": { 00:14:19.477 "crdt1": 0, 00:14:19.477 "crdt2": 0, 00:14:19.477 "crdt3": 0 00:14:19.477 } 00:14:19.477 }, 00:14:19.477 { 00:14:19.477 "method": "nvmf_create_transport", 00:14:19.477 "params": { 00:14:19.477 "trtype": "TCP", 00:14:19.477 "max_queue_depth": 128, 00:14:19.477 "max_io_qpairs_per_ctrlr": 127, 00:14:19.477 "in_capsule_data_size": 4096, 00:14:19.477 "max_io_size": 131072, 00:14:19.477 "io_unit_size": 131072, 00:14:19.477 "max_aq_depth": 128, 00:14:19.477 "num_shared_buffers": 511, 00:14:19.477 "buf_cache_size": 4294967295, 00:14:19.477 "dif_insert_or_strip": false, 00:14:19.477 "zcopy": false, 00:14:19.477 "c2h_success": false, 00:14:19.477 
"sock_priority": 0, 00:14:19.477 "abort_timeout_sec": 1, 00:14:19.477 "ack_timeout": 0, 00:14:19.477 "data_wr_pool_size": 0 00:14:19.477 } 00:14:19.477 }, 00:14:19.477 { 00:14:19.477 "method": "nvmf_create_subsystem", 00:14:19.477 "params": { 00:14:19.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.477 "allow_any_host": false, 00:14:19.477 "serial_number": "SPDK00000000000001", 00:14:19.477 "model_number": "SPDK bdev Controller", 00:14:19.477 "max_namespaces": 10, 00:14:19.477 "min_cntlid": 1, 00:14:19.477 "max_cntlid": 65519, 00:14:19.477 "ana_reporting": false 00:14:19.477 } 00:14:19.477 }, 00:14:19.477 { 00:14:19.477 "method": "nvmf_subsystem_add_host", 00:14:19.477 "params": { 00:14:19.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.477 "host": "nqn.2016-06.io.spdk:host1", 00:14:19.477 "psk": "key0" 00:14:19.477 } 00:14:19.477 }, 00:14:19.477 { 00:14:19.477 "method": "nvmf_subsystem_add_ns", 00:14:19.477 "params": { 00:14:19.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.477 "namespace": { 00:14:19.477 "nsid": 1, 00:14:19.477 "bdev_name": "malloc0", 00:14:19.477 "nguid": "C8ED6D15B5A7443194B5F469189D3AF6", 00:14:19.477 "uuid": "c8ed6d15-b5a7-4431-94b5-f469189d3af6", 00:14:19.477 "no_auto_visible": false 00:14:19.477 } 00:14:19.477 } 00:14:19.477 }, 00:14:19.477 { 00:14:19.477 "method": "nvmf_subsystem_add_listener", 00:14:19.477 "params": { 00:14:19.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.477 "listen_address": { 00:14:19.477 "trtype": "TCP", 00:14:19.477 "adrfam": "IPv4", 00:14:19.477 "traddr": "10.0.0.3", 00:14:19.477 "trsvcid": "4420" 00:14:19.477 }, 00:14:19.477 "secure_channel": true 00:14:19.477 } 00:14:19.477 } 00:14:19.477 ] 00:14:19.477 } 00:14:19.477 ] 00:14:19.477 }' 00:14:19.477 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:19.736 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:19.736 "subsystems": [ 00:14:19.736 { 00:14:19.736 "subsystem": "keyring", 00:14:19.736 "config": [ 00:14:19.736 { 00:14:19.736 "method": "keyring_file_add_key", 00:14:19.736 "params": { 00:14:19.736 "name": "key0", 00:14:19.736 "path": "/tmp/tmp.cV4vmCYsGW" 00:14:19.736 } 00:14:19.736 } 00:14:19.736 ] 00:14:19.736 }, 00:14:19.736 { 00:14:19.736 "subsystem": "iobuf", 00:14:19.736 "config": [ 00:14:19.736 { 00:14:19.736 "method": "iobuf_set_options", 00:14:19.736 "params": { 00:14:19.736 "small_pool_count": 8192, 00:14:19.737 "large_pool_count": 1024, 00:14:19.737 "small_bufsize": 8192, 00:14:19.737 "large_bufsize": 135168 00:14:19.737 } 00:14:19.737 } 00:14:19.737 ] 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "subsystem": "sock", 00:14:19.737 "config": [ 00:14:19.737 { 00:14:19.737 "method": "sock_set_default_impl", 00:14:19.737 "params": { 00:14:19.737 "impl_name": "uring" 00:14:19.737 } 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "method": "sock_impl_set_options", 00:14:19.737 "params": { 00:14:19.737 "impl_name": "ssl", 00:14:19.737 "recv_buf_size": 4096, 00:14:19.737 "send_buf_size": 4096, 00:14:19.737 "enable_recv_pipe": true, 00:14:19.737 "enable_quickack": false, 00:14:19.737 "enable_placement_id": 0, 00:14:19.737 "enable_zerocopy_send_server": true, 00:14:19.737 "enable_zerocopy_send_client": false, 00:14:19.737 "zerocopy_threshold": 0, 00:14:19.737 "tls_version": 0, 00:14:19.737 "enable_ktls": false 00:14:19.737 } 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "method": "sock_impl_set_options", 00:14:19.737 "params": { 
00:14:19.737 "impl_name": "posix", 00:14:19.737 "recv_buf_size": 2097152, 00:14:19.737 "send_buf_size": 2097152, 00:14:19.737 "enable_recv_pipe": true, 00:14:19.737 "enable_quickack": false, 00:14:19.737 "enable_placement_id": 0, 00:14:19.737 "enable_zerocopy_send_server": true, 00:14:19.737 "enable_zerocopy_send_client": false, 00:14:19.737 "zerocopy_threshold": 0, 00:14:19.737 "tls_version": 0, 00:14:19.737 "enable_ktls": false 00:14:19.737 } 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "method": "sock_impl_set_options", 00:14:19.737 "params": { 00:14:19.737 "impl_name": "uring", 00:14:19.737 "recv_buf_size": 2097152, 00:14:19.737 "send_buf_size": 2097152, 00:14:19.737 "enable_recv_pipe": true, 00:14:19.737 "enable_quickack": false, 00:14:19.737 "enable_placement_id": 0, 00:14:19.737 "enable_zerocopy_send_server": false, 00:14:19.737 "enable_zerocopy_send_client": false, 00:14:19.737 "zerocopy_threshold": 0, 00:14:19.737 "tls_version": 0, 00:14:19.737 "enable_ktls": false 00:14:19.737 } 00:14:19.737 } 00:14:19.737 ] 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "subsystem": "vmd", 00:14:19.737 "config": [] 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "subsystem": "accel", 00:14:19.737 "config": [ 00:14:19.737 { 00:14:19.737 "method": "accel_set_options", 00:14:19.737 "params": { 00:14:19.737 "small_cache_size": 128, 00:14:19.737 "large_cache_size": 16, 00:14:19.737 "task_count": 2048, 00:14:19.737 "sequence_count": 2048, 00:14:19.737 "buf_count": 2048 00:14:19.737 } 00:14:19.737 } 00:14:19.737 ] 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "subsystem": "bdev", 00:14:19.737 "config": [ 00:14:19.737 { 00:14:19.737 "method": "bdev_set_options", 00:14:19.737 "params": { 00:14:19.737 "bdev_io_pool_size": 65535, 00:14:19.737 "bdev_io_cache_size": 256, 00:14:19.737 "bdev_auto_examine": true, 00:14:19.737 "iobuf_small_cache_size": 128, 00:14:19.737 "iobuf_large_cache_size": 16 00:14:19.737 } 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "method": "bdev_raid_set_options", 00:14:19.737 "params": { 00:14:19.737 "process_window_size_kb": 1024, 00:14:19.737 "process_max_bandwidth_mb_sec": 0 00:14:19.737 } 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "method": "bdev_iscsi_set_options", 00:14:19.737 "params": { 00:14:19.737 "timeout_sec": 30 00:14:19.737 } 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "method": "bdev_nvme_set_options", 00:14:19.737 "params": { 00:14:19.737 "action_on_timeout": "none", 00:14:19.737 "timeout_us": 0, 00:14:19.737 "timeout_admin_us": 0, 00:14:19.737 "keep_alive_timeout_ms": 10000, 00:14:19.737 "arbitration_burst": 0, 00:14:19.737 "low_priority_weight": 0, 00:14:19.737 "medium_priority_weight": 0, 00:14:19.737 "high_priority_weight": 0, 00:14:19.737 "nvme_adminq_poll_period_us": 10000, 00:14:19.737 "nvme_ioq_poll_period_us": 0, 00:14:19.737 "io_queue_requests": 512, 00:14:19.737 "delay_cmd_submit": true, 00:14:19.737 "transport_retry_count": 4, 00:14:19.737 "bdev_retry_count": 3, 00:14:19.737 "transport_ack_timeout": 0, 00:14:19.737 "ctrlr_loss_timeout_sec": 0, 00:14:19.737 "reconnect_delay_sec": 0, 00:14:19.737 "fast_io_fail_timeout_sec": 0, 00:14:19.737 "disable_auto_failback": false, 00:14:19.737 "generate_uuids": false, 00:14:19.737 "transport_tos": 0, 00:14:19.737 "nvme_error_stat": false, 00:14:19.737 "rdma_srq_size": 0, 00:14:19.737 "io_path_stat": false, 00:14:19.737 "allow_accel_sequence": false, 00:14:19.737 "rdma_max_cq_size": 0, 00:14:19.737 "rdma_cm_event_timeout_ms": 0, 00:14:19.737 "dhchap_digests": [ 00:14:19.737 "sha256", 00:14:19.737 "sha384", 00:14:19.737 "sha512" 
00:14:19.737 ], 00:14:19.737 "dhchap_dhgroups": [ 00:14:19.737 "null", 00:14:19.737 "ffdhe2048", 00:14:19.737 "ffdhe3072", 00:14:19.737 "ffdhe4096", 00:14:19.737 "ffdhe6144", 00:14:19.737 "ffdhe8192" 00:14:19.737 ] 00:14:19.737 } 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "method": "bdev_nvme_attach_controller", 00:14:19.737 "params": { 00:14:19.737 "name": "TLSTEST", 00:14:19.737 "trtype": "TCP", 00:14:19.737 "adrfam": "IPv4", 00:14:19.737 "traddr": "10.0.0.3", 00:14:19.737 "trsvcid": "4420", 00:14:19.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.737 "prchk_reftag": false, 00:14:19.737 "prchk_guard": false, 00:14:19.737 "ctrlr_loss_timeout_sec": 0, 00:14:19.737 "reconnect_delay_sec": 0, 00:14:19.737 "fast_io_fail_timeout_sec": 0, 00:14:19.737 "psk": "key0", 00:14:19.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.737 "hdgst": false, 00:14:19.737 "ddgst": false, 00:14:19.737 "multipath": "multipath" 00:14:19.737 } 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "method": "bdev_nvme_set_hotplug", 00:14:19.737 "params": { 00:14:19.737 "period_us": 100000, 00:14:19.737 "enable": false 00:14:19.737 } 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "method": "bdev_wait_for_examine" 00:14:19.737 } 00:14:19.737 ] 00:14:19.737 }, 00:14:19.737 { 00:14:19.737 "subsystem": "nbd", 00:14:19.737 "config": [] 00:14:19.737 } 00:14:19.737 ] 00:14:19.737 }' 00:14:19.737 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72545 00:14:19.737 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72545 ']' 00:14:19.737 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72545 00:14:19.737 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:19.737 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.737 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72545 00:14:19.737 killing process with pid 72545 00:14:19.737 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.737 00:14:19.737 Latency(us) 00:14:19.737 [2024-10-08T09:20:11.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.737 [2024-10-08T09:20:11.420Z] =================================================================================================================== 00:14:19.737 [2024-10-08T09:20:11.420Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:19.737 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:19.738 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:19.738 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72545' 00:14:19.738 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72545 00:14:19.738 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72545 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72495 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72495 ']' 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72495 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72495 00:14:19.998 killing process with pid 72495 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72495' 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72495 00:14:19.998 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72495 00:14:20.257 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:20.257 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:20.257 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:20.257 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.257 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:20.257 "subsystems": [ 00:14:20.257 { 00:14:20.257 "subsystem": "keyring", 00:14:20.257 "config": [ 00:14:20.257 { 00:14:20.257 "method": "keyring_file_add_key", 00:14:20.257 "params": { 00:14:20.257 "name": "key0", 00:14:20.257 "path": "/tmp/tmp.cV4vmCYsGW" 00:14:20.257 } 00:14:20.257 } 00:14:20.257 ] 00:14:20.257 }, 00:14:20.257 { 00:14:20.257 "subsystem": "iobuf", 00:14:20.257 "config": [ 00:14:20.257 { 00:14:20.257 "method": "iobuf_set_options", 00:14:20.257 "params": { 00:14:20.257 "small_pool_count": 8192, 00:14:20.258 "large_pool_count": 1024, 00:14:20.258 "small_bufsize": 8192, 00:14:20.258 "large_bufsize": 135168 00:14:20.258 } 00:14:20.258 } 00:14:20.258 ] 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "subsystem": "sock", 00:14:20.258 "config": [ 00:14:20.258 { 00:14:20.258 "method": "sock_set_default_impl", 00:14:20.258 "params": { 00:14:20.258 "impl_name": "uring" 00:14:20.258 } 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "method": "sock_impl_set_options", 00:14:20.258 "params": { 00:14:20.258 "impl_name": "ssl", 00:14:20.258 "recv_buf_size": 4096, 00:14:20.258 "send_buf_size": 4096, 00:14:20.258 "enable_recv_pipe": true, 00:14:20.258 "enable_quickack": false, 00:14:20.258 "enable_placement_id": 0, 00:14:20.258 "enable_zerocopy_send_server": true, 00:14:20.258 "enable_zerocopy_send_client": false, 00:14:20.258 "zerocopy_threshold": 0, 00:14:20.258 "tls_version": 0, 00:14:20.258 "enable_ktls": false 00:14:20.258 } 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "method": "sock_impl_set_options", 00:14:20.258 "params": { 00:14:20.258 "impl_name": "posix", 00:14:20.258 "recv_buf_size": 2097152, 00:14:20.258 "send_buf_size": 2097152, 00:14:20.258 "enable_recv_pipe": true, 00:14:20.258 "enable_quickack": false, 00:14:20.258 "enable_placement_id": 0, 00:14:20.258 "enable_zerocopy_send_server": true, 00:14:20.258 "enable_zerocopy_send_client": false, 00:14:20.258 "zerocopy_threshold": 0, 00:14:20.258 "tls_version": 0, 00:14:20.258 "enable_ktls": false 00:14:20.258 } 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "method": "sock_impl_set_options", 00:14:20.258 
"params": { 00:14:20.258 "impl_name": "uring", 00:14:20.258 "recv_buf_size": 2097152, 00:14:20.258 "send_buf_size": 2097152, 00:14:20.258 "enable_recv_pipe": true, 00:14:20.258 "enable_quickack": false, 00:14:20.258 "enable_placement_id": 0, 00:14:20.258 "enable_zerocopy_send_server": false, 00:14:20.258 "enable_zerocopy_send_client": false, 00:14:20.258 "zerocopy_threshold": 0, 00:14:20.258 "tls_version": 0, 00:14:20.258 "enable_ktls": false 00:14:20.258 } 00:14:20.258 } 00:14:20.258 ] 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "subsystem": "vmd", 00:14:20.258 "config": [] 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "subsystem": "accel", 00:14:20.258 "config": [ 00:14:20.258 { 00:14:20.258 "method": "accel_set_options", 00:14:20.258 "params": { 00:14:20.258 "small_cache_size": 128, 00:14:20.258 "large_cache_size": 16, 00:14:20.258 "task_count": 2048, 00:14:20.258 "sequence_count": 2048, 00:14:20.258 "buf_count": 2048 00:14:20.258 } 00:14:20.258 } 00:14:20.258 ] 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "subsystem": "bdev", 00:14:20.258 "config": [ 00:14:20.258 { 00:14:20.258 "method": "bdev_set_options", 00:14:20.258 "params": { 00:14:20.258 "bdev_io_pool_size": 65535, 00:14:20.258 "bdev_io_cache_size": 256, 00:14:20.258 "bdev_auto_examine": true, 00:14:20.258 "iobuf_small_cache_size": 128, 00:14:20.258 "iobuf_large_cache_size": 16 00:14:20.258 } 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "method": "bdev_raid_set_options", 00:14:20.258 "params": { 00:14:20.258 "process_window_size_kb": 1024, 00:14:20.258 "process_max_bandwidth_mb_sec": 0 00:14:20.258 } 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "method": "bdev_iscsi_set_options", 00:14:20.258 "params": { 00:14:20.258 "timeout_sec": 30 00:14:20.258 } 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "method": "bdev_nvme_set_options", 00:14:20.258 "params": { 00:14:20.258 "action_on_timeout": "none", 00:14:20.258 "timeout_us": 0, 00:14:20.258 "timeout_admin_us": 0, 00:14:20.258 "keep_alive_timeout_ms": 10000, 00:14:20.258 "arbitration_burst": 0, 00:14:20.258 "low_priority_weight": 0, 00:14:20.258 "medium_priority_weight": 0, 00:14:20.258 "high_priority_weight": 0, 00:14:20.258 "nvme_adminq_poll_period_us": 10000, 00:14:20.258 "nvme_ioq_poll_period_us": 0, 00:14:20.258 "io_queue_requests": 0, 00:14:20.258 "delay_cmd_submit": true, 00:14:20.258 "transport_retry_count": 4, 00:14:20.258 "bdev_retry_count": 3, 00:14:20.258 "transport_ack_timeout": 0, 00:14:20.258 "ctrlr_loss_timeout_sec": 0, 00:14:20.258 "reconnect_delay_sec": 0, 00:14:20.258 "fast_io_fail_timeout_sec": 0, 00:14:20.258 "disable_auto_failback": false, 00:14:20.258 "generate_uuids": false, 00:14:20.258 "transport_tos": 0, 00:14:20.258 "nvme_error_stat": false, 00:14:20.258 "rdma_srq_size": 0, 00:14:20.258 "io_path_stat": false, 00:14:20.258 "allow_accel_sequence": false, 00:14:20.258 "rdma_max_cq_size": 0, 00:14:20.258 "rdma_cm_event_timeout_ms": 0, 00:14:20.258 "dhchap_digests": [ 00:14:20.258 "sha256", 00:14:20.258 "sha384", 00:14:20.258 "sha512" 00:14:20.258 ], 00:14:20.258 "dhchap_dhgroups": [ 00:14:20.258 "null", 00:14:20.258 "ffdhe2048", 00:14:20.258 "ffdhe3072", 00:14:20.258 "ffdhe4096", 00:14:20.258 "ffdhe6144", 00:14:20.258 "ffdhe8192" 00:14:20.258 ] 00:14:20.258 } 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "method": "bdev_nvme_set_hotplug", 00:14:20.258 "params": { 00:14:20.258 "period_us": 100000, 00:14:20.258 "enable": false 00:14:20.258 } 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "method": "bdev_malloc_create", 00:14:20.258 "params": { 00:14:20.258 "name": 
"malloc0", 00:14:20.258 "num_blocks": 8192, 00:14:20.258 "block_size": 4096, 00:14:20.258 "physical_block_size": 4096, 00:14:20.258 "uuid": "c8ed6d15-b5a7-4431-94b5-f469189d3af6", 00:14:20.258 "optimal_io_boundary": 0, 00:14:20.258 "md_size": 0, 00:14:20.258 "dif_type": 0, 00:14:20.258 "dif_is_head_of_md": false, 00:14:20.258 "dif_pi_format": 0 00:14:20.258 } 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "method": "bdev_wait_for_examine" 00:14:20.258 } 00:14:20.258 ] 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "subsystem": "nbd", 00:14:20.258 "config": [] 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "subsystem": "scheduler", 00:14:20.258 "config": [ 00:14:20.258 { 00:14:20.258 "method": "framework_set_scheduler", 00:14:20.258 "params": { 00:14:20.258 "name": "static" 00:14:20.258 } 00:14:20.258 } 00:14:20.258 ] 00:14:20.258 }, 00:14:20.258 { 00:14:20.258 "subsystem": "nvmf", 00:14:20.258 "config": [ 00:14:20.258 { 00:14:20.258 "method": "nvmf_set_config", 00:14:20.258 "params": { 00:14:20.258 "discovery_filter": "match_any", 00:14:20.258 "admin_cmd_passthru": { 00:14:20.258 "identify_ctrlr": false 00:14:20.258 }, 00:14:20.258 "dhchap_digests": [ 00:14:20.258 "sha256", 00:14:20.258 "sha384", 00:14:20.258 "sha512" 00:14:20.258 ], 00:14:20.258 "dhchap_dhgroups": [ 00:14:20.258 "null", 00:14:20.258 "ffdhe2048", 00:14:20.258 "ffdhe3072", 00:14:20.258 "ffdhe4096", 00:14:20.258 "ffdhe6144", 00:14:20.258 "ffdhe8192" 00:14:20.259 ] 00:14:20.259 } 00:14:20.259 }, 00:14:20.259 { 00:14:20.259 "method": "nvmf_set_max_subsystems", 00:14:20.259 "params": { 00:14:20.259 "max_subsystems": 1024 00:14:20.259 } 00:14:20.259 }, 00:14:20.259 { 00:14:20.259 "method": "nvmf_set_crdt", 00:14:20.259 "params": { 00:14:20.259 "crdt1": 0, 00:14:20.259 "crdt2": 0, 00:14:20.259 "crdt3": 0 00:14:20.259 } 00:14:20.259 }, 00:14:20.259 { 00:14:20.259 "method": "nvmf_create_transport", 00:14:20.259 "params": { 00:14:20.259 "trtype": "TCP", 00:14:20.259 "max_queue_depth": 128, 00:14:20.259 "max_io_qpairs_per_ctrlr": 127, 00:14:20.259 "in_capsule_data_size": 4096, 00:14:20.259 "max_io_size": 131072, 00:14:20.259 "io_unit_size": 131072, 00:14:20.259 "max_aq_depth": 128, 00:14:20.259 "num_shared_buffers": 511, 00:14:20.259 "buf_cache_size": 4294967295, 00:14:20.259 "dif_insert_or_strip": false, 00:14:20.259 "zcopy": false, 00:14:20.259 "c2h_success": false, 00:14:20.259 "sock_priority": 0, 00:14:20.259 "abort_timeout_sec": 1, 00:14:20.259 "ack_timeout": 0, 00:14:20.259 "data_wr_pool_size": 0 00:14:20.259 } 00:14:20.259 }, 00:14:20.259 { 00:14:20.259 "method": "nvmf_create_subsystem", 00:14:20.259 "params": { 00:14:20.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.259 "allow_any_host": false, 00:14:20.259 "serial_number": "SPDK00000000000001", 00:14:20.259 "model_number": "SPDK bdev Controller", 00:14:20.259 "max_namespaces": 10, 00:14:20.259 "min_cntlid": 1, 00:14:20.259 "max_cntlid": 65519, 00:14:20.259 "ana_reporting": false 00:14:20.259 } 00:14:20.259 }, 00:14:20.259 { 00:14:20.259 "method": "nvmf_subsystem_add_host", 00:14:20.259 "params": { 00:14:20.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.259 "host": "nqn.2016-06.io.spdk:host1", 00:14:20.259 "psk": "key0" 00:14:20.259 } 00:14:20.259 }, 00:14:20.259 { 00:14:20.259 "method": "nvmf_subsystem_add_ns", 00:14:20.259 "params": { 00:14:20.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.259 "namespace": { 00:14:20.259 "nsid": 1, 00:14:20.259 "bdev_name": "malloc0", 00:14:20.259 "nguid": "C8ED6D15B5A7443194B5F469189D3AF6", 00:14:20.259 "uuid": 
"c8ed6d15-b5a7-4431-94b5-f469189d3af6", 00:14:20.259 "no_auto_visible": false 00:14:20.259 } 00:14:20.259 } 00:14:20.259 }, 00:14:20.259 { 00:14:20.259 "method": "nvmf_subsystem_add_listener", 00:14:20.259 "params": { 00:14:20.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.259 "listen_address": { 00:14:20.259 "trtype": "TCP", 00:14:20.259 "adrfam": "IPv4", 00:14:20.259 "traddr": "10.0.0.3", 00:14:20.259 "trsvcid": "4420" 00:14:20.259 }, 00:14:20.259 "secure_channel": true 00:14:20.259 } 00:14:20.259 } 00:14:20.259 ] 00:14:20.259 } 00:14:20.259 ] 00:14:20.259 }' 00:14:20.259 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72595 00:14:20.259 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:20.259 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72595 00:14:20.259 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72595 ']' 00:14:20.259 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.259 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.259 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.259 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.259 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.259 [2024-10-08 09:20:11.879619] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:20.259 [2024-10-08 09:20:11.879729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.518 [2024-10-08 09:20:12.013971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.518 [2024-10-08 09:20:12.094796] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.518 [2024-10-08 09:20:12.094845] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.518 [2024-10-08 09:20:12.094871] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.518 [2024-10-08 09:20:12.094878] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.518 [2024-10-08 09:20:12.094884] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:20.518 [2024-10-08 09:20:12.095317] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.776 [2024-10-08 09:20:12.262033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.776 [2024-10-08 09:20:12.340687] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.776 [2024-10-08 09:20:12.379818] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:20.776 [2024-10-08 09:20:12.380055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72627 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72627 /var/tmp/bdevperf.sock 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72627 ']' 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
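The initiator side mirrors this: bdevperf is launched idle with -z on its own RPC socket, its JSON configuration (echoed next) arrives the same way on /dev/fd/63, and the verify workload is only started later through bdevperf.py perform_tests, which is what produces the Latency table further down. A condensed sketch using only the flags visible in this trace; the config file name is illustrative, standing in for the echoed JSON.

  # Start bdevperf waiting (-z) on a private RPC socket; bdevperf_config.json
  # stands in for the JSON blob the test delivers on /dev/fd/63.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 \
      -c <(cat bdevperf_config.json) &

  # Once the socket is listening, trigger the configured job over RPC;
  # the IOPS/latency summary in the log is this call's output.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests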
00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:21.344 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:21.344 "subsystems": [ 00:14:21.344 { 00:14:21.344 "subsystem": "keyring", 00:14:21.344 "config": [ 00:14:21.344 { 00:14:21.344 "method": "keyring_file_add_key", 00:14:21.344 "params": { 00:14:21.344 "name": "key0", 00:14:21.344 "path": "/tmp/tmp.cV4vmCYsGW" 00:14:21.344 } 00:14:21.344 } 00:14:21.344 ] 00:14:21.344 }, 00:14:21.344 { 00:14:21.344 "subsystem": "iobuf", 00:14:21.344 "config": [ 00:14:21.344 { 00:14:21.344 "method": "iobuf_set_options", 00:14:21.344 "params": { 00:14:21.344 "small_pool_count": 8192, 00:14:21.344 "large_pool_count": 1024, 00:14:21.344 "small_bufsize": 8192, 00:14:21.344 "large_bufsize": 135168 00:14:21.344 } 00:14:21.344 } 00:14:21.344 ] 00:14:21.344 }, 00:14:21.344 { 00:14:21.344 "subsystem": "sock", 00:14:21.344 "config": [ 00:14:21.344 { 00:14:21.344 "method": "sock_set_default_impl", 00:14:21.344 "params": { 00:14:21.344 "impl_name": "uring" 00:14:21.344 } 00:14:21.344 }, 00:14:21.344 { 00:14:21.344 "method": "sock_impl_set_options", 00:14:21.344 "params": { 00:14:21.344 "impl_name": "ssl", 00:14:21.344 "recv_buf_size": 4096, 00:14:21.344 "send_buf_size": 4096, 00:14:21.344 "enable_recv_pipe": true, 00:14:21.344 "enable_quickack": false, 00:14:21.344 "enable_placement_id": 0, 00:14:21.344 "enable_zerocopy_send_server": true, 00:14:21.344 "enable_zerocopy_send_client": false, 00:14:21.344 "zerocopy_threshold": 0, 00:14:21.344 "tls_version": 0, 00:14:21.344 "enable_ktls": false 00:14:21.344 } 00:14:21.344 }, 00:14:21.344 { 00:14:21.344 "method": "sock_impl_set_options", 00:14:21.344 "params": { 00:14:21.344 "impl_name": "posix", 00:14:21.344 "recv_buf_size": 2097152, 00:14:21.344 "send_buf_size": 2097152, 00:14:21.344 "enable_recv_pipe": true, 00:14:21.344 "enable_quickack": false, 00:14:21.344 "enable_placement_id": 0, 00:14:21.344 "enable_zerocopy_send_server": true, 00:14:21.344 "enable_zerocopy_send_client": false, 00:14:21.344 "zerocopy_threshold": 0, 00:14:21.344 "tls_version": 0, 00:14:21.344 "enable_ktls": false 00:14:21.344 } 00:14:21.344 }, 00:14:21.344 { 00:14:21.344 "method": "sock_impl_set_options", 00:14:21.344 "params": { 00:14:21.344 "impl_name": "uring", 00:14:21.345 "recv_buf_size": 2097152, 00:14:21.345 "send_buf_size": 2097152, 00:14:21.345 "enable_recv_pipe": true, 00:14:21.345 "enable_quickack": false, 00:14:21.345 "enable_placement_id": 0, 00:14:21.345 "enable_zerocopy_send_server": false, 00:14:21.345 "enable_zerocopy_send_client": false, 00:14:21.345 "zerocopy_threshold": 0, 00:14:21.345 "tls_version": 0, 00:14:21.345 "enable_ktls": false 00:14:21.345 } 00:14:21.345 } 00:14:21.345 ] 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "subsystem": "vmd", 00:14:21.345 "config": [] 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "subsystem": "accel", 00:14:21.345 "config": [ 00:14:21.345 { 00:14:21.345 "method": "accel_set_options", 00:14:21.345 "params": { 00:14:21.345 "small_cache_size": 128, 00:14:21.345 "large_cache_size": 16, 00:14:21.345 "task_count": 2048, 00:14:21.345 "sequence_count": 2048, 00:14:21.345 "buf_count": 2048 
00:14:21.345 } 00:14:21.345 } 00:14:21.345 ] 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "subsystem": "bdev", 00:14:21.345 "config": [ 00:14:21.345 { 00:14:21.345 "method": "bdev_set_options", 00:14:21.345 "params": { 00:14:21.345 "bdev_io_pool_size": 65535, 00:14:21.345 "bdev_io_cache_size": 256, 00:14:21.345 "bdev_auto_examine": true, 00:14:21.345 "iobuf_small_cache_size": 128, 00:14:21.345 "iobuf_large_cache_size": 16 00:14:21.345 } 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "method": "bdev_raid_set_options", 00:14:21.345 "params": { 00:14:21.345 "process_window_size_kb": 1024, 00:14:21.345 "process_max_bandwidth_mb_sec": 0 00:14:21.345 } 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "method": "bdev_iscsi_set_options", 00:14:21.345 "params": { 00:14:21.345 "timeout_sec": 30 00:14:21.345 } 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "method": "bdev_nvme_set_options", 00:14:21.345 "params": { 00:14:21.345 "action_on_timeout": "none", 00:14:21.345 "timeout_us": 0, 00:14:21.345 "timeout_admin_us": 0, 00:14:21.345 "keep_alive_timeout_ms": 10000, 00:14:21.345 "arbitration_burst": 0, 00:14:21.345 "low_priority_weight": 0, 00:14:21.345 "medium_priority_weight": 0, 00:14:21.345 "high_priority_weight": 0, 00:14:21.345 "nvme_adminq_poll_period_us": 10000, 00:14:21.345 "nvme_ioq_poll_period_us": 0, 00:14:21.345 "io_queue_requests": 512, 00:14:21.345 "delay_cmd_submit": true, 00:14:21.345 "transport_retry_count": 4, 00:14:21.345 "bdev_retry_count": 3, 00:14:21.345 "transport_ack_timeout": 0, 00:14:21.345 "ctrlr_loss_timeout_sec": 0, 00:14:21.345 "reconnect_delay_sec": 0, 00:14:21.345 "fast_io_fail_timeout_sec": 0, 00:14:21.345 "disable_auto_failback": false, 00:14:21.345 "generate_uuids": false, 00:14:21.345 "transport_tos": 0, 00:14:21.345 "nvme_error_stat": false, 00:14:21.345 "rdma_srq_size": 0, 00:14:21.345 "io_path_stat": false, 00:14:21.345 "allow_accel_sequence": false, 00:14:21.345 "rdma_max_cq_size": 0, 00:14:21.345 "rdma_cm_event_timeout_ms": 0, 00:14:21.345 "dhchap_digests": [ 00:14:21.345 "sha256", 00:14:21.345 "sha384", 00:14:21.345 "sha512" 00:14:21.345 ], 00:14:21.345 "dhchap_dhgroups": [ 00:14:21.345 "null", 00:14:21.345 "ffdhe2048", 00:14:21.345 "ffdhe3072", 00:14:21.345 "ffdhe4096", 00:14:21.345 "ffdhe6144", 00:14:21.345 "ffdhe8192" 00:14:21.345 ] 00:14:21.345 } 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "method": "bdev_nvme_attach_controller", 00:14:21.345 "params": { 00:14:21.345 "name": "TLSTEST", 00:14:21.345 "trtype": "TCP", 00:14:21.345 "adrfam": "IPv4", 00:14:21.345 "traddr": "10.0.0.3", 00:14:21.345 "trsvcid": "4420", 00:14:21.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.345 "prchk_reftag": false, 00:14:21.345 "prchk_guard": false, 00:14:21.345 "ctrlr_loss_timeout_sec": 0, 00:14:21.345 "reconnect_delay_sec": 0, 00:14:21.345 "fast_io_fail_timeout_sec": 0, 00:14:21.345 "psk": "key0", 00:14:21.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.345 "hdgst": false, 00:14:21.345 "ddgst": false, 00:14:21.345 "multipath": "multipath" 00:14:21.345 } 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "method": "bdev_nvme_set_hotplug", 00:14:21.345 "params": { 00:14:21.345 "period_us": 100000, 00:14:21.345 "enable": false 00:14:21.345 } 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "method": "bdev_wait_for_examine" 00:14:21.345 } 00:14:21.345 ] 00:14:21.345 }, 00:14:21.345 { 00:14:21.345 "subsystem": "nbd", 00:14:21.345 "config": [] 00:14:21.345 } 00:14:21.345 ] 00:14:21.345 }' 00:14:21.345 [2024-10-08 09:20:12.952766] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 
initialization... 00:14:21.345 [2024-10-08 09:20:12.952870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72627 ] 00:14:21.604 [2024-10-08 09:20:13.088269] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.604 [2024-10-08 09:20:13.201431] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.863 [2024-10-08 09:20:13.341184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.863 [2024-10-08 09:20:13.387884] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.430 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.430 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:22.430 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:22.430 Running I/O for 10 seconds... 00:14:24.743 4233.00 IOPS, 16.54 MiB/s [2024-10-08T09:20:17.362Z] 4303.50 IOPS, 16.81 MiB/s [2024-10-08T09:20:18.297Z] 4373.67 IOPS, 17.08 MiB/s [2024-10-08T09:20:19.260Z] 4372.00 IOPS, 17.08 MiB/s [2024-10-08T09:20:20.194Z] 4377.60 IOPS, 17.10 MiB/s [2024-10-08T09:20:21.130Z] 4416.50 IOPS, 17.25 MiB/s [2024-10-08T09:20:22.505Z] 4442.86 IOPS, 17.35 MiB/s [2024-10-08T09:20:23.441Z] 4467.25 IOPS, 17.45 MiB/s [2024-10-08T09:20:24.376Z] 4452.44 IOPS, 17.39 MiB/s [2024-10-08T09:20:24.376Z] 4439.60 IOPS, 17.34 MiB/s 00:14:32.693 Latency(us) 00:14:32.693 [2024-10-08T09:20:24.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.693 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:32.693 Verification LBA range: start 0x0 length 0x2000 00:14:32.693 TLSTESTn1 : 10.02 4444.83 17.36 0.00 0.00 28745.83 5570.56 23116.33 00:14:32.693 [2024-10-08T09:20:24.376Z] =================================================================================================================== 00:14:32.693 [2024-10-08T09:20:24.376Z] Total : 4444.83 17.36 0.00 0.00 28745.83 5570.56 23116.33 00:14:32.693 { 00:14:32.693 "results": [ 00:14:32.693 { 00:14:32.693 "job": "TLSTESTn1", 00:14:32.693 "core_mask": "0x4", 00:14:32.693 "workload": "verify", 00:14:32.693 "status": "finished", 00:14:32.693 "verify_range": { 00:14:32.693 "start": 0, 00:14:32.693 "length": 8192 00:14:32.693 }, 00:14:32.693 "queue_depth": 128, 00:14:32.693 "io_size": 4096, 00:14:32.693 "runtime": 10.016589, 00:14:32.693 "iops": 4444.8264773567125, 00:14:32.693 "mibps": 17.36260342717466, 00:14:32.693 "io_failed": 0, 00:14:32.693 "io_timeout": 0, 00:14:32.693 "avg_latency_us": 28745.829734676627, 00:14:32.693 "min_latency_us": 5570.56, 00:14:32.693 "max_latency_us": 23116.334545454545 00:14:32.693 } 00:14:32.693 ], 00:14:32.693 "core_count": 1 00:14:32.693 } 00:14:32.693 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:32.693 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72627 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72627 ']' 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 72627 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72627 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:32.694 killing process with pid 72627 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72627' 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72627 00:14:32.694 Received shutdown signal, test time was about 10.000000 seconds 00:14:32.694 00:14:32.694 Latency(us) 00:14:32.694 [2024-10-08T09:20:24.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.694 [2024-10-08T09:20:24.377Z] =================================================================================================================== 00:14:32.694 [2024-10-08T09:20:24.377Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:32.694 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72627 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72595 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72595 ']' 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72595 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72595 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:32.953 killing process with pid 72595 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72595' 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72595 00:14:32.953 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72595 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72770 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:33.212 09:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72770 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72770 ']' 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.212 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.212 [2024-10-08 09:20:24.734796] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:33.212 [2024-10-08 09:20:24.734916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.212 [2024-10-08 09:20:24.878083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.487 [2024-10-08 09:20:25.001822] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.487 [2024-10-08 09:20:25.001904] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.487 [2024-10-08 09:20:25.001931] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.487 [2024-10-08 09:20:25.001942] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.487 [2024-10-08 09:20:25.001951] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:33.487 [2024-10-08 09:20:25.002461] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.487 [2024-10-08 09:20:25.060457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:34.064 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.064 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:34.064 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:34.064 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:34.064 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.064 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.064 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.cV4vmCYsGW 00:14:34.064 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cV4vmCYsGW 00:14:34.064 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:34.322 [2024-10-08 09:20:26.004476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.581 09:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:34.839 09:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:35.098 [2024-10-08 09:20:26.608652] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:35.098 [2024-10-08 09:20:26.608934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:35.098 09:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:35.357 malloc0 00:14:35.357 09:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:35.615 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW 00:14:35.873 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:36.132 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:36.132 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72827 00:14:36.132 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.132 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72827 /var/tmp/bdevperf.sock 00:14:36.132 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72827 ']' 00:14:36.132 
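In this half of the test, setup_nvmf_tgt configures the already-running target over RPC rather than through a startup config: create the TCP transport, build a subsystem backed by a malloc bdev, open a TLS-enabled listener (-k), and register the PSK for the one allowed host. Gathered from the trace above into a single sequence:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o                      # TCP transport
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.3 -s 4420 -k                        # -k => TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0                # backing namespace
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW        # load the PSK file
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
       nqn.2016-06.io.spdk:host1 --psk key0                 # host may use key0

Only after this does the test start bdevperf as the TLS initiator, as the trace continues below.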
09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.132 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:36.132 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.132 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.132 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.132 [2024-10-08 09:20:27.695031] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:36.132 [2024-10-08 09:20:27.695154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72827 ] 00:14:36.391 [2024-10-08 09:20:27.833639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.391 [2024-10-08 09:20:27.936165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.391 [2024-10-08 09:20:27.995211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:37.326 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:37.326 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:37.326 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW 00:14:37.326 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:37.584 [2024-10-08 09:20:29.128689] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.584 nvme0n1 00:14:37.584 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:37.841 Running I/O for 1 seconds... 
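On the bdevperf side, the same PSK file is loaded into that application's own keyring and handed to bdev_nvme_attach_controller, which is what creates the nvme0n1 bdev exercised by the one-second verify run whose results follow. Restated from the trace as a plain sequence against the bdevperf RPC socket:

  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # Register the PSK inside the bdevperf process, then attach the
  # TLS-protected controller; the namespace shows up as bdev nvme0n1.
  $BPERF_RPC keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW
  $BPERF_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
       -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 \
       -q nqn.2016-06.io.spdk:host1

  # Run the configured verify job; the JSON "results" block printed next
  # in the log is its output.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests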
00:14:38.775 4224.00 IOPS, 16.50 MiB/s 00:14:38.775 Latency(us) 00:14:38.775 [2024-10-08T09:20:30.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.775 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:38.775 Verification LBA range: start 0x0 length 0x2000 00:14:38.775 nvme0n1 : 1.03 4227.29 16.51 0.00 0.00 29961.01 10962.39 22520.55 00:14:38.775 [2024-10-08T09:20:30.458Z] =================================================================================================================== 00:14:38.775 [2024-10-08T09:20:30.458Z] Total : 4227.29 16.51 0.00 0.00 29961.01 10962.39 22520.55 00:14:38.775 { 00:14:38.775 "results": [ 00:14:38.775 { 00:14:38.775 "job": "nvme0n1", 00:14:38.775 "core_mask": "0x2", 00:14:38.775 "workload": "verify", 00:14:38.775 "status": "finished", 00:14:38.775 "verify_range": { 00:14:38.775 "start": 0, 00:14:38.775 "length": 8192 00:14:38.775 }, 00:14:38.775 "queue_depth": 128, 00:14:38.775 "io_size": 4096, 00:14:38.775 "runtime": 1.029502, 00:14:38.775 "iops": 4227.286590992539, 00:14:38.775 "mibps": 16.512838246064604, 00:14:38.775 "io_failed": 0, 00:14:38.775 "io_timeout": 0, 00:14:38.775 "avg_latency_us": 29961.014759358288, 00:14:38.775 "min_latency_us": 10962.385454545454, 00:14:38.775 "max_latency_us": 22520.552727272727 00:14:38.775 } 00:14:38.775 ], 00:14:38.775 "core_count": 1 00:14:38.775 } 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72827 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72827 ']' 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72827 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72827 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:38.775 killing process with pid 72827 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72827' 00:14:38.775 Received shutdown signal, test time was about 1.000000 seconds 00:14:38.775 00:14:38.775 Latency(us) 00:14:38.775 [2024-10-08T09:20:30.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.775 [2024-10-08T09:20:30.458Z] =================================================================================================================== 00:14:38.775 [2024-10-08T09:20:30.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72827 00:14:38.775 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72827 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72770 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72770 ']' 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72770 00:14:39.036 09:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72770 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.036 killing process with pid 72770 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72770' 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72770 00:14:39.036 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72770 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72878 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72878 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72878 ']' 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.298 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.298 [2024-10-08 09:20:30.965007] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:39.298 [2024-10-08 09:20:30.965126] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.557 [2024-10-08 09:20:31.093896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.557 [2024-10-08 09:20:31.169834] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.557 [2024-10-08 09:20:31.169906] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:39.557 [2024-10-08 09:20:31.169933] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.557 [2024-10-08 09:20:31.169941] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.557 [2024-10-08 09:20:31.169947] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.557 [2024-10-08 09:20:31.170375] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.557 [2024-10-08 09:20:31.223510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.816 [2024-10-08 09:20:31.329667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.816 malloc0 00:14:39.816 [2024-10-08 09:20:31.369625] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:39.816 [2024-10-08 09:20:31.369859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72897 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72897 /var/tmp/bdevperf.sock 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72897 ']' 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.816 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.816 [2024-10-08 09:20:31.459848] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:39.816 [2024-10-08 09:20:31.459958] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72897 ] 00:14:40.075 [2024-10-08 09:20:31.599368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.075 [2024-10-08 09:20:31.689271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.075 [2024-10-08 09:20:31.747326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:41.010 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.010 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:41.010 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cV4vmCYsGW 00:14:41.268 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:41.526 [2024-10-08 09:20:32.954628] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:41.526 nvme0n1 00:14:41.526 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:41.526 Running I/O for 1 seconds... 
00:14:42.720 4352.00 IOPS, 17.00 MiB/s 00:14:42.720 Latency(us) 00:14:42.720 [2024-10-08T09:20:34.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.720 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:42.720 Verification LBA range: start 0x0 length 0x2000 00:14:42.720 nvme0n1 : 1.03 4356.51 17.02 0.00 0.00 29093.00 6374.87 18350.08 00:14:42.720 [2024-10-08T09:20:34.403Z] =================================================================================================================== 00:14:42.720 [2024-10-08T09:20:34.403Z] Total : 4356.51 17.02 0.00 0.00 29093.00 6374.87 18350.08 00:14:42.720 { 00:14:42.720 "results": [ 00:14:42.720 { 00:14:42.720 "job": "nvme0n1", 00:14:42.720 "core_mask": "0x2", 00:14:42.720 "workload": "verify", 00:14:42.720 "status": "finished", 00:14:42.720 "verify_range": { 00:14:42.720 "start": 0, 00:14:42.720 "length": 8192 00:14:42.720 }, 00:14:42.720 "queue_depth": 128, 00:14:42.720 "io_size": 4096, 00:14:42.720 "runtime": 1.028346, 00:14:42.720 "iops": 4356.510357408888, 00:14:42.720 "mibps": 17.01761858362847, 00:14:42.720 "io_failed": 0, 00:14:42.720 "io_timeout": 0, 00:14:42.720 "avg_latency_us": 29092.996987012986, 00:14:42.720 "min_latency_us": 6374.865454545455, 00:14:42.720 "max_latency_us": 18350.08 00:14:42.720 } 00:14:42.720 ], 00:14:42.720 "core_count": 1 00:14:42.721 } 00:14:42.721 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:42.721 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.721 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.721 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.721 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:42.721 "subsystems": [ 00:14:42.721 { 00:14:42.721 "subsystem": "keyring", 00:14:42.721 "config": [ 00:14:42.721 { 00:14:42.721 "method": "keyring_file_add_key", 00:14:42.721 "params": { 00:14:42.721 "name": "key0", 00:14:42.721 "path": "/tmp/tmp.cV4vmCYsGW" 00:14:42.721 } 00:14:42.721 } 00:14:42.721 ] 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "subsystem": "iobuf", 00:14:42.721 "config": [ 00:14:42.721 { 00:14:42.721 "method": "iobuf_set_options", 00:14:42.721 "params": { 00:14:42.721 "small_pool_count": 8192, 00:14:42.721 "large_pool_count": 1024, 00:14:42.721 "small_bufsize": 8192, 00:14:42.721 "large_bufsize": 135168 00:14:42.721 } 00:14:42.721 } 00:14:42.721 ] 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "subsystem": "sock", 00:14:42.721 "config": [ 00:14:42.721 { 00:14:42.721 "method": "sock_set_default_impl", 00:14:42.721 "params": { 00:14:42.721 "impl_name": "uring" 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "sock_impl_set_options", 00:14:42.721 "params": { 00:14:42.721 "impl_name": "ssl", 00:14:42.721 "recv_buf_size": 4096, 00:14:42.721 "send_buf_size": 4096, 00:14:42.721 "enable_recv_pipe": true, 00:14:42.721 "enable_quickack": false, 00:14:42.721 "enable_placement_id": 0, 00:14:42.721 "enable_zerocopy_send_server": true, 00:14:42.721 "enable_zerocopy_send_client": false, 00:14:42.721 "zerocopy_threshold": 0, 00:14:42.721 "tls_version": 0, 00:14:42.721 "enable_ktls": false 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "sock_impl_set_options", 00:14:42.721 "params": { 00:14:42.721 "impl_name": "posix", 00:14:42.721 "recv_buf_size": 2097152, 
00:14:42.721 "send_buf_size": 2097152, 00:14:42.721 "enable_recv_pipe": true, 00:14:42.721 "enable_quickack": false, 00:14:42.721 "enable_placement_id": 0, 00:14:42.721 "enable_zerocopy_send_server": true, 00:14:42.721 "enable_zerocopy_send_client": false, 00:14:42.721 "zerocopy_threshold": 0, 00:14:42.721 "tls_version": 0, 00:14:42.721 "enable_ktls": false 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "sock_impl_set_options", 00:14:42.721 "params": { 00:14:42.721 "impl_name": "uring", 00:14:42.721 "recv_buf_size": 2097152, 00:14:42.721 "send_buf_size": 2097152, 00:14:42.721 "enable_recv_pipe": true, 00:14:42.721 "enable_quickack": false, 00:14:42.721 "enable_placement_id": 0, 00:14:42.721 "enable_zerocopy_send_server": false, 00:14:42.721 "enable_zerocopy_send_client": false, 00:14:42.721 "zerocopy_threshold": 0, 00:14:42.721 "tls_version": 0, 00:14:42.721 "enable_ktls": false 00:14:42.721 } 00:14:42.721 } 00:14:42.721 ] 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "subsystem": "vmd", 00:14:42.721 "config": [] 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "subsystem": "accel", 00:14:42.721 "config": [ 00:14:42.721 { 00:14:42.721 "method": "accel_set_options", 00:14:42.721 "params": { 00:14:42.721 "small_cache_size": 128, 00:14:42.721 "large_cache_size": 16, 00:14:42.721 "task_count": 2048, 00:14:42.721 "sequence_count": 2048, 00:14:42.721 "buf_count": 2048 00:14:42.721 } 00:14:42.721 } 00:14:42.721 ] 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "subsystem": "bdev", 00:14:42.721 "config": [ 00:14:42.721 { 00:14:42.721 "method": "bdev_set_options", 00:14:42.721 "params": { 00:14:42.721 "bdev_io_pool_size": 65535, 00:14:42.721 "bdev_io_cache_size": 256, 00:14:42.721 "bdev_auto_examine": true, 00:14:42.721 "iobuf_small_cache_size": 128, 00:14:42.721 "iobuf_large_cache_size": 16 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "bdev_raid_set_options", 00:14:42.721 "params": { 00:14:42.721 "process_window_size_kb": 1024, 00:14:42.721 "process_max_bandwidth_mb_sec": 0 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "bdev_iscsi_set_options", 00:14:42.721 "params": { 00:14:42.721 "timeout_sec": 30 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "bdev_nvme_set_options", 00:14:42.721 "params": { 00:14:42.721 "action_on_timeout": "none", 00:14:42.721 "timeout_us": 0, 00:14:42.721 "timeout_admin_us": 0, 00:14:42.721 "keep_alive_timeout_ms": 10000, 00:14:42.721 "arbitration_burst": 0, 00:14:42.721 "low_priority_weight": 0, 00:14:42.721 "medium_priority_weight": 0, 00:14:42.721 "high_priority_weight": 0, 00:14:42.721 "nvme_adminq_poll_period_us": 10000, 00:14:42.721 "nvme_ioq_poll_period_us": 0, 00:14:42.721 "io_queue_requests": 0, 00:14:42.721 "delay_cmd_submit": true, 00:14:42.721 "transport_retry_count": 4, 00:14:42.721 "bdev_retry_count": 3, 00:14:42.721 "transport_ack_timeout": 0, 00:14:42.721 "ctrlr_loss_timeout_sec": 0, 00:14:42.721 "reconnect_delay_sec": 0, 00:14:42.721 "fast_io_fail_timeout_sec": 0, 00:14:42.721 "disable_auto_failback": false, 00:14:42.721 "generate_uuids": false, 00:14:42.721 "transport_tos": 0, 00:14:42.721 "nvme_error_stat": false, 00:14:42.721 "rdma_srq_size": 0, 00:14:42.721 "io_path_stat": false, 00:14:42.721 "allow_accel_sequence": false, 00:14:42.721 "rdma_max_cq_size": 0, 00:14:42.721 "rdma_cm_event_timeout_ms": 0, 00:14:42.721 "dhchap_digests": [ 00:14:42.721 "sha256", 00:14:42.721 "sha384", 00:14:42.721 "sha512" 00:14:42.721 ], 00:14:42.721 "dhchap_dhgroups": [ 00:14:42.721 "null", 
00:14:42.721 "ffdhe2048", 00:14:42.721 "ffdhe3072", 00:14:42.721 "ffdhe4096", 00:14:42.721 "ffdhe6144", 00:14:42.721 "ffdhe8192" 00:14:42.721 ] 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "bdev_nvme_set_hotplug", 00:14:42.721 "params": { 00:14:42.721 "period_us": 100000, 00:14:42.721 "enable": false 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "bdev_malloc_create", 00:14:42.721 "params": { 00:14:42.721 "name": "malloc0", 00:14:42.721 "num_blocks": 8192, 00:14:42.721 "block_size": 4096, 00:14:42.721 "physical_block_size": 4096, 00:14:42.721 "uuid": "4e8e1d05-a1ba-4401-b320-d133efef80cd", 00:14:42.721 "optimal_io_boundary": 0, 00:14:42.721 "md_size": 0, 00:14:42.721 "dif_type": 0, 00:14:42.721 "dif_is_head_of_md": false, 00:14:42.721 "dif_pi_format": 0 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "bdev_wait_for_examine" 00:14:42.721 } 00:14:42.721 ] 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "subsystem": "nbd", 00:14:42.721 "config": [] 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "subsystem": "scheduler", 00:14:42.721 "config": [ 00:14:42.721 { 00:14:42.721 "method": "framework_set_scheduler", 00:14:42.721 "params": { 00:14:42.721 "name": "static" 00:14:42.721 } 00:14:42.721 } 00:14:42.721 ] 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "subsystem": "nvmf", 00:14:42.721 "config": [ 00:14:42.721 { 00:14:42.721 "method": "nvmf_set_config", 00:14:42.721 "params": { 00:14:42.721 "discovery_filter": "match_any", 00:14:42.721 "admin_cmd_passthru": { 00:14:42.721 "identify_ctrlr": false 00:14:42.721 }, 00:14:42.721 "dhchap_digests": [ 00:14:42.721 "sha256", 00:14:42.721 "sha384", 00:14:42.721 "sha512" 00:14:42.721 ], 00:14:42.721 "dhchap_dhgroups": [ 00:14:42.721 "null", 00:14:42.721 "ffdhe2048", 00:14:42.721 "ffdhe3072", 00:14:42.721 "ffdhe4096", 00:14:42.721 "ffdhe6144", 00:14:42.721 "ffdhe8192" 00:14:42.721 ] 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "nvmf_set_max_subsystems", 00:14:42.721 "params": { 00:14:42.721 "max_subsystems": 1024 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "nvmf_set_crdt", 00:14:42.721 "params": { 00:14:42.721 "crdt1": 0, 00:14:42.721 "crdt2": 0, 00:14:42.721 "crdt3": 0 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "nvmf_create_transport", 00:14:42.721 "params": { 00:14:42.721 "trtype": "TCP", 00:14:42.721 "max_queue_depth": 128, 00:14:42.721 "max_io_qpairs_per_ctrlr": 127, 00:14:42.721 "in_capsule_data_size": 4096, 00:14:42.721 "max_io_size": 131072, 00:14:42.721 "io_unit_size": 131072, 00:14:42.721 "max_aq_depth": 128, 00:14:42.721 "num_shared_buffers": 511, 00:14:42.721 "buf_cache_size": 4294967295, 00:14:42.721 "dif_insert_or_strip": false, 00:14:42.721 "zcopy": false, 00:14:42.721 "c2h_success": false, 00:14:42.721 "sock_priority": 0, 00:14:42.721 "abort_timeout_sec": 1, 00:14:42.721 "ack_timeout": 0, 00:14:42.721 "data_wr_pool_size": 0 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "nvmf_create_subsystem", 00:14:42.721 "params": { 00:14:42.721 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.721 "allow_any_host": false, 00:14:42.721 "serial_number": "00000000000000000000", 00:14:42.721 "model_number": "SPDK bdev Controller", 00:14:42.721 "max_namespaces": 32, 00:14:42.721 "min_cntlid": 1, 00:14:42.721 "max_cntlid": 65519, 00:14:42.721 "ana_reporting": false 00:14:42.721 } 00:14:42.721 }, 00:14:42.721 { 00:14:42.721 "method": "nvmf_subsystem_add_host", 00:14:42.721 "params": { 00:14:42.721 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:42.722 "host": "nqn.2016-06.io.spdk:host1", 00:14:42.722 "psk": "key0" 00:14:42.722 } 00:14:42.722 }, 00:14:42.722 { 00:14:42.722 "method": "nvmf_subsystem_add_ns", 00:14:42.722 "params": { 00:14:42.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.722 "namespace": { 00:14:42.722 "nsid": 1, 00:14:42.722 "bdev_name": "malloc0", 00:14:42.722 "nguid": "4E8E1D05A1BA4401B320D133EFEF80CD", 00:14:42.722 "uuid": "4e8e1d05-a1ba-4401-b320-d133efef80cd", 00:14:42.722 "no_auto_visible": false 00:14:42.722 } 00:14:42.722 } 00:14:42.722 }, 00:14:42.722 { 00:14:42.722 "method": "nvmf_subsystem_add_listener", 00:14:42.722 "params": { 00:14:42.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.722 "listen_address": { 00:14:42.722 "trtype": "TCP", 00:14:42.722 "adrfam": "IPv4", 00:14:42.722 "traddr": "10.0.0.3", 00:14:42.722 "trsvcid": "4420" 00:14:42.722 }, 00:14:42.722 "secure_channel": false, 00:14:42.722 "sock_impl": "ssl" 00:14:42.722 } 00:14:42.722 } 00:14:42.722 ] 00:14:42.722 } 00:14:42.722 ] 00:14:42.722 }' 00:14:42.722 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:42.981 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:42.981 "subsystems": [ 00:14:42.981 { 00:14:42.981 "subsystem": "keyring", 00:14:42.981 "config": [ 00:14:42.981 { 00:14:42.981 "method": "keyring_file_add_key", 00:14:42.981 "params": { 00:14:42.981 "name": "key0", 00:14:42.981 "path": "/tmp/tmp.cV4vmCYsGW" 00:14:42.981 } 00:14:42.981 } 00:14:42.981 ] 00:14:42.981 }, 00:14:42.981 { 00:14:42.981 "subsystem": "iobuf", 00:14:42.981 "config": [ 00:14:42.981 { 00:14:42.981 "method": "iobuf_set_options", 00:14:42.981 "params": { 00:14:42.981 "small_pool_count": 8192, 00:14:42.981 "large_pool_count": 1024, 00:14:42.981 "small_bufsize": 8192, 00:14:42.981 "large_bufsize": 135168 00:14:42.981 } 00:14:42.981 } 00:14:42.981 ] 00:14:42.981 }, 00:14:42.981 { 00:14:42.981 "subsystem": "sock", 00:14:42.981 "config": [ 00:14:42.981 { 00:14:42.981 "method": "sock_set_default_impl", 00:14:42.981 "params": { 00:14:42.981 "impl_name": "uring" 00:14:42.981 } 00:14:42.981 }, 00:14:42.981 { 00:14:42.981 "method": "sock_impl_set_options", 00:14:42.981 "params": { 00:14:42.981 "impl_name": "ssl", 00:14:42.981 "recv_buf_size": 4096, 00:14:42.981 "send_buf_size": 4096, 00:14:42.981 "enable_recv_pipe": true, 00:14:42.981 "enable_quickack": false, 00:14:42.981 "enable_placement_id": 0, 00:14:42.981 "enable_zerocopy_send_server": true, 00:14:42.981 "enable_zerocopy_send_client": false, 00:14:42.982 "zerocopy_threshold": 0, 00:14:42.982 "tls_version": 0, 00:14:42.982 "enable_ktls": false 00:14:42.982 } 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "method": "sock_impl_set_options", 00:14:42.982 "params": { 00:14:42.982 "impl_name": "posix", 00:14:42.982 "recv_buf_size": 2097152, 00:14:42.982 "send_buf_size": 2097152, 00:14:42.982 "enable_recv_pipe": true, 00:14:42.982 "enable_quickack": false, 00:14:42.982 "enable_placement_id": 0, 00:14:42.982 "enable_zerocopy_send_server": true, 00:14:42.982 "enable_zerocopy_send_client": false, 00:14:42.982 "zerocopy_threshold": 0, 00:14:42.982 "tls_version": 0, 00:14:42.982 "enable_ktls": false 00:14:42.982 } 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "method": "sock_impl_set_options", 00:14:42.982 "params": { 00:14:42.982 "impl_name": "uring", 00:14:42.982 "recv_buf_size": 2097152, 00:14:42.982 "send_buf_size": 2097152, 00:14:42.982 
"enable_recv_pipe": true, 00:14:42.982 "enable_quickack": false, 00:14:42.982 "enable_placement_id": 0, 00:14:42.982 "enable_zerocopy_send_server": false, 00:14:42.982 "enable_zerocopy_send_client": false, 00:14:42.982 "zerocopy_threshold": 0, 00:14:42.982 "tls_version": 0, 00:14:42.982 "enable_ktls": false 00:14:42.982 } 00:14:42.982 } 00:14:42.982 ] 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "subsystem": "vmd", 00:14:42.982 "config": [] 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "subsystem": "accel", 00:14:42.982 "config": [ 00:14:42.982 { 00:14:42.982 "method": "accel_set_options", 00:14:42.982 "params": { 00:14:42.982 "small_cache_size": 128, 00:14:42.982 "large_cache_size": 16, 00:14:42.982 "task_count": 2048, 00:14:42.982 "sequence_count": 2048, 00:14:42.982 "buf_count": 2048 00:14:42.982 } 00:14:42.982 } 00:14:42.982 ] 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "subsystem": "bdev", 00:14:42.982 "config": [ 00:14:42.982 { 00:14:42.982 "method": "bdev_set_options", 00:14:42.982 "params": { 00:14:42.982 "bdev_io_pool_size": 65535, 00:14:42.982 "bdev_io_cache_size": 256, 00:14:42.982 "bdev_auto_examine": true, 00:14:42.982 "iobuf_small_cache_size": 128, 00:14:42.982 "iobuf_large_cache_size": 16 00:14:42.982 } 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "method": "bdev_raid_set_options", 00:14:42.982 "params": { 00:14:42.982 "process_window_size_kb": 1024, 00:14:42.982 "process_max_bandwidth_mb_sec": 0 00:14:42.982 } 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "method": "bdev_iscsi_set_options", 00:14:42.982 "params": { 00:14:42.982 "timeout_sec": 30 00:14:42.982 } 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "method": "bdev_nvme_set_options", 00:14:42.982 "params": { 00:14:42.982 "action_on_timeout": "none", 00:14:42.982 "timeout_us": 0, 00:14:42.982 "timeout_admin_us": 0, 00:14:42.982 "keep_alive_timeout_ms": 10000, 00:14:42.982 "arbitration_burst": 0, 00:14:42.982 "low_priority_weight": 0, 00:14:42.982 "medium_priority_weight": 0, 00:14:42.982 "high_priority_weight": 0, 00:14:42.982 "nvme_adminq_poll_period_us": 10000, 00:14:42.982 "nvme_ioq_poll_period_us": 0, 00:14:42.982 "io_queue_requests": 512, 00:14:42.982 "delay_cmd_submit": true, 00:14:42.982 "transport_retry_count": 4, 00:14:42.982 "bdev_retry_count": 3, 00:14:42.982 "transport_ack_timeout": 0, 00:14:42.982 "ctrlr_loss_timeout_sec": 0, 00:14:42.982 "reconnect_delay_sec": 0, 00:14:42.982 "fast_io_fail_timeout_sec": 0, 00:14:42.982 "disable_auto_failback": false, 00:14:42.982 "generate_uuids": false, 00:14:42.982 "transport_tos": 0, 00:14:42.982 "nvme_error_stat": false, 00:14:42.982 "rdma_srq_size": 0, 00:14:42.982 "io_path_stat": false, 00:14:42.982 "allow_accel_sequence": false, 00:14:42.982 "rdma_max_cq_size": 0, 00:14:42.982 "rdma_cm_event_timeout_ms": 0, 00:14:42.982 "dhchap_digests": [ 00:14:42.982 "sha256", 00:14:42.982 "sha384", 00:14:42.982 "sha512" 00:14:42.982 ], 00:14:42.982 "dhchap_dhgroups": [ 00:14:42.982 "null", 00:14:42.982 "ffdhe2048", 00:14:42.982 "ffdhe3072", 00:14:42.982 "ffdhe4096", 00:14:42.982 "ffdhe6144", 00:14:42.982 "ffdhe8192" 00:14:42.982 ] 00:14:42.982 } 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "method": "bdev_nvme_attach_controller", 00:14:42.982 "params": { 00:14:42.982 "name": "nvme0", 00:14:42.982 "trtype": "TCP", 00:14:42.982 "adrfam": "IPv4", 00:14:42.982 "traddr": "10.0.0.3", 00:14:42.982 "trsvcid": "4420", 00:14:42.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.982 "prchk_reftag": false, 00:14:42.982 "prchk_guard": false, 00:14:42.982 "ctrlr_loss_timeout_sec": 0, 00:14:42.982 
"reconnect_delay_sec": 0, 00:14:42.982 "fast_io_fail_timeout_sec": 0, 00:14:42.982 "psk": "key0", 00:14:42.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.982 "hdgst": false, 00:14:42.982 "ddgst": false, 00:14:42.982 "multipath": "multipath" 00:14:42.982 } 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "method": "bdev_nvme_set_hotplug", 00:14:42.982 "params": { 00:14:42.982 "period_us": 100000, 00:14:42.982 "enable": false 00:14:42.982 } 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "method": "bdev_enable_histogram", 00:14:42.982 "params": { 00:14:42.982 "name": "nvme0n1", 00:14:42.982 "enable": true 00:14:42.982 } 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "method": "bdev_wait_for_examine" 00:14:42.982 } 00:14:42.982 ] 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "subsystem": "nbd", 00:14:42.982 "config": [] 00:14:42.982 } 00:14:42.982 ] 00:14:42.982 }' 00:14:42.982 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72897 00:14:42.982 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72897 ']' 00:14:42.982 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72897 00:14:42.982 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:42.982 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.241 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72897 00:14:43.241 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:43.241 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:43.241 killing process with pid 72897 00:14:43.241 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72897' 00:14:43.241 Received shutdown signal, test time was about 1.000000 seconds 00:14:43.241 00:14:43.241 Latency(us) 00:14:43.241 [2024-10-08T09:20:34.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.241 [2024-10-08T09:20:34.924Z] =================================================================================================================== 00:14:43.241 [2024-10-08T09:20:34.924Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.241 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72897 00:14:43.241 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72897 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72878 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72878 ']' 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72878 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72878 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72878' 00:14:43.500 killing process with pid 72878 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72878 00:14:43.500 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72878 00:14:43.760 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:43.760 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:43.760 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:43.760 "subsystems": [ 00:14:43.760 { 00:14:43.760 "subsystem": "keyring", 00:14:43.760 "config": [ 00:14:43.760 { 00:14:43.760 "method": "keyring_file_add_key", 00:14:43.760 "params": { 00:14:43.760 "name": "key0", 00:14:43.760 "path": "/tmp/tmp.cV4vmCYsGW" 00:14:43.760 } 00:14:43.760 } 00:14:43.760 ] 00:14:43.760 }, 00:14:43.760 { 00:14:43.760 "subsystem": "iobuf", 00:14:43.760 "config": [ 00:14:43.760 { 00:14:43.760 "method": "iobuf_set_options", 00:14:43.760 "params": { 00:14:43.760 "small_pool_count": 8192, 00:14:43.760 "large_pool_count": 1024, 00:14:43.760 "small_bufsize": 8192, 00:14:43.760 "large_bufsize": 135168 00:14:43.760 } 00:14:43.760 } 00:14:43.760 ] 00:14:43.760 }, 00:14:43.760 { 00:14:43.760 "subsystem": "sock", 00:14:43.760 "config": [ 00:14:43.760 { 00:14:43.760 "method": "sock_set_default_impl", 00:14:43.760 "params": { 00:14:43.760 "impl_name": "uring" 00:14:43.760 } 00:14:43.760 }, 00:14:43.760 { 00:14:43.760 "method": "sock_impl_set_options", 00:14:43.760 "params": { 00:14:43.760 "impl_name": "ssl", 00:14:43.760 "recv_buf_size": 4096, 00:14:43.760 "send_buf_size": 4096, 00:14:43.760 "enable_recv_pipe": true, 00:14:43.760 "enable_quickack": false, 00:14:43.760 "enable_placement_id": 0, 00:14:43.760 "enable_zerocopy_send_server": true, 00:14:43.760 "enable_zerocopy_send_client": false, 00:14:43.760 "zerocopy_threshold": 0, 00:14:43.760 "tls_version": 0, 00:14:43.760 "enable_ktls": false 00:14:43.760 } 00:14:43.760 }, 00:14:43.760 { 00:14:43.760 "method": "sock_impl_set_options", 00:14:43.760 "params": { 00:14:43.760 "impl_name": "posix", 00:14:43.760 "recv_buf_size": 2097152, 00:14:43.760 "send_buf_size": 2097152, 00:14:43.760 "enable_recv_pipe": true, 00:14:43.760 "enable_quickack": false, 00:14:43.760 "enable_placement_id": 0, 00:14:43.760 "enable_zerocopy_send_server": true, 00:14:43.760 "enable_zerocopy_send_client": false, 00:14:43.760 "zerocopy_threshold": 0, 00:14:43.760 "tls_version": 0, 00:14:43.760 "enable_ktls": false 00:14:43.760 } 00:14:43.760 }, 00:14:43.760 { 00:14:43.760 "method": "sock_impl_set_options", 00:14:43.760 "params": { 00:14:43.760 "impl_name": "uring", 00:14:43.760 "recv_buf_size": 2097152, 00:14:43.760 "send_buf_size": 2097152, 00:14:43.760 "enable_recv_pipe": true, 00:14:43.760 "enable_quickack": false, 00:14:43.760 "enable_placement_id": 0, 00:14:43.760 "enable_zerocopy_send_server": false, 00:14:43.760 "enable_zerocopy_send_client": false, 00:14:43.760 "zerocopy_threshold": 0, 00:14:43.760 "tls_version": 0, 00:14:43.760 "enable_ktls": false 00:14:43.760 } 00:14:43.760 } 00:14:43.760 ] 00:14:43.760 }, 00:14:43.760 { 00:14:43.760 "subsystem": "vmd", 00:14:43.760 "config": [] 00:14:43.760 }, 00:14:43.760 { 00:14:43.760 "subsystem": "accel", 00:14:43.760 "config": [ 00:14:43.760 { 00:14:43.760 "method": "accel_set_options", 00:14:43.760 
"params": { 00:14:43.760 "small_cache_size": 128, 00:14:43.760 "large_cache_size": 16, 00:14:43.760 "task_count": 2048, 00:14:43.760 "sequence_count": 2048, 00:14:43.760 "buf_count": 2048 00:14:43.760 } 00:14:43.760 } 00:14:43.760 ] 00:14:43.760 }, 00:14:43.760 { 00:14:43.760 "subsystem": "bdev", 00:14:43.760 "config": [ 00:14:43.760 { 00:14:43.760 "method": "bdev_set_options", 00:14:43.760 "params": { 00:14:43.760 "bdev_io_pool_size": 65535, 00:14:43.760 "bdev_io_cache_size": 256, 00:14:43.760 "bdev_auto_examine": true, 00:14:43.760 "iobuf_small_cache_size": 128, 00:14:43.760 "iobuf_large_cache_size": 16 00:14:43.760 } 00:14:43.760 }, 00:14:43.760 { 00:14:43.760 "method": "bdev_raid_set_options", 00:14:43.760 "params": { 00:14:43.761 "process_window_size_kb": 1024, 00:14:43.761 "process_max_bandwidth_mb_sec": 0 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "bdev_iscsi_set_options", 00:14:43.761 "params": { 00:14:43.761 "timeout_sec": 30 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "bdev_nvme_set_options", 00:14:43.761 "params": { 00:14:43.761 "action_on_timeout": "none", 00:14:43.761 "timeout_us": 0, 00:14:43.761 "timeout_admin_us": 0, 00:14:43.761 "keep_alive_timeout_ms": 10000, 00:14:43.761 "arbitration_burst": 0, 00:14:43.761 "low_priority_weight": 0, 00:14:43.761 "medium_priority_weight": 0, 00:14:43.761 "high_priority_weight": 0, 00:14:43.761 "nvme_adminq_poll_period_us": 10000, 00:14:43.761 "nvme_ioq_poll_period_us": 0, 00:14:43.761 "io_queue_requests": 0, 00:14:43.761 "delay_cmd_submit": true, 00:14:43.761 "transport_retry_count": 4, 00:14:43.761 "bdev_retry_count": 3, 00:14:43.761 "transport_ack_timeout": 0, 00:14:43.761 "ctrlr_loss_timeout_sec": 0, 00:14:43.761 "reconnect_delay_sec": 0, 00:14:43.761 "fast_io_fail_timeout_sec": 0, 00:14:43.761 "disable_auto_failback": false, 00:14:43.761 "generate_uuids": false, 00:14:43.761 "transport_tos": 0, 00:14:43.761 "nvme_error_stat": false, 00:14:43.761 "rdma_srq_size": 0, 00:14:43.761 "io_path_stat": false, 00:14:43.761 "allow_accel_sequence": false, 00:14:43.761 "rdma_max_cq_size": 0, 00:14:43.761 "rdma_cm_event_timeout_ms": 0, 00:14:43.761 "dhchap_digests": [ 00:14:43.761 "sha256", 00:14:43.761 "sha384", 00:14:43.761 "sha512" 00:14:43.761 ], 00:14:43.761 "dhchap_dhgroups": [ 00:14:43.761 "null", 00:14:43.761 "ffdhe2048", 00:14:43.761 "ffdhe3072", 00:14:43.761 "ffdhe4096", 00:14:43.761 "ffdhe6144", 00:14:43.761 "ffdhe8192" 00:14:43.761 ] 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "bdev_nvme_set_hotplug", 00:14:43.761 "params": { 00:14:43.761 "period_us": 100000, 00:14:43.761 "enable": false 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "bdev_malloc_create", 00:14:43.761 "params": { 00:14:43.761 "name": "malloc0", 00:14:43.761 "num_blocks": 8192, 00:14:43.761 "block_size": 4096, 00:14:43.761 "physical_block_size": 4096, 00:14:43.761 "uuid": "4e8e1d05-a1ba-4401-b320-d133efef80cd", 00:14:43.761 "optimal_io_boundary": 0, 00:14:43.761 "md_size": 0, 00:14:43.761 "dif_type": 0, 00:14:43.761 "dif_is_head_of_md": false, 00:14:43.761 "dif_pi_format": 0 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "bdev_wait_for_examine" 00:14:43.761 } 00:14:43.761 ] 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "subsystem": "nbd", 00:14:43.761 "config": [] 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "subsystem": "scheduler", 00:14:43.761 "config": [ 00:14:43.761 { 00:14:43.761 "method": "framework_set_scheduler", 00:14:43.761 "params": { 
00:14:43.761 "name": "static" 00:14:43.761 } 00:14:43.761 } 00:14:43.761 ] 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "subsystem": "nvmf", 00:14:43.761 "config": [ 00:14:43.761 { 00:14:43.761 "method": "nvmf_set_config", 00:14:43.761 "params": { 00:14:43.761 "discovery_filter": "match_any", 00:14:43.761 "admin_cmd_passthru": { 00:14:43.761 "identify_ctrlr": false 00:14:43.761 }, 00:14:43.761 "dhchap_digests": [ 00:14:43.761 "sha256", 00:14:43.761 "sha384", 00:14:43.761 "sha512" 00:14:43.761 ], 00:14:43.761 "dhchap_dhgroups": [ 00:14:43.761 "null", 00:14:43.761 "ffdhe2048", 00:14:43.761 "ffdhe3072", 00:14:43.761 "ffdhe4096", 00:14:43.761 "ffdhe6144", 00:14:43.761 "ffdhe8192" 00:14:43.761 ] 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "nvmf_set_max_subsystems", 00:14:43.761 "params": { 00:14:43.761 "max_subsystems": 1024 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "nvmf_set_crdt", 00:14:43.761 "params": { 00:14:43.761 "crdt1": 0, 00:14:43.761 "crdt2": 0, 00:14:43.761 "crdt3": 0 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "nvmf_create_transport", 00:14:43.761 "params": { 00:14:43.761 "trtype": "TCP", 00:14:43.761 "max_queue_depth": 128, 00:14:43.761 "max_io_qpairs_per_ctrlr": 127, 00:14:43.761 "in_capsule_data_size": 4096, 00:14:43.761 "max_io_size": 131072, 00:14:43.761 "io_unit_size": 131072, 00:14:43.761 "max_aq_depth": 128, 00:14:43.761 "num_shared_buffers": 511, 00:14:43.761 "buf_cache_size": 4294967295, 00:14:43.761 "dif_insert_or_strip": false, 00:14:43.761 "zcopy": false, 00:14:43.761 "c2h_success": false, 00:14:43.761 "sock_priority": 0, 00:14:43.761 "abort_timeout_sec": 1, 00:14:43.761 "ack_timeout": 0, 00:14:43.761 "data_wr_pool_size": 0 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "nvmf_create_subsystem", 00:14:43.761 "params": { 00:14:43.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.761 "allow_any_host": false, 00:14:43.761 "serial_number": "00000000000000000000", 00:14:43.761 "model_number": "SPDK bdev Controller", 00:14:43.761 "max_namespaces": 32, 00:14:43.761 "min_cntlid": 1, 00:14:43.761 "max_cntlid": 65519, 00:14:43.761 "ana_reporting": false 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "nvmf_subsystem_add_host", 00:14:43.761 "params": { 00:14:43.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.761 "host": "nqn.2016-06.io.spdk:host1", 00:14:43.761 "psk": "key0" 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "nvmf_subsystem_add_ns", 00:14:43.761 "params": { 00:14:43.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.761 "namespace": { 00:14:43.761 "nsid": 1, 00:14:43.761 "bdev_name": "malloc0", 00:14:43.761 "nguid": "4E8E1D05A1BA4401B320D133EFEF80CD", 00:14:43.761 "uuid": "4e8e1d05-a1ba-4401-b320-d133efef80cd", 00:14:43.761 "no_auto_visible": false 00:14:43.761 } 00:14:43.761 } 00:14:43.761 }, 00:14:43.761 { 00:14:43.761 "method": "nvmf_subsystem_add_listener", 00:14:43.761 "params": { 00:14:43.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.761 "listen_address": { 00:14:43.761 "trtype": "TCP", 00:14:43.761 "adrfam": "IPv4", 00:14:43.761 "traddr": "10.0.0.3", 00:14:43.761 "trsvcid": "4420" 00:14:43.761 }, 00:14:43.761 "secure_channel": false, 00:14:43.761 "sock_impl": "ssl" 00:14:43.761 } 00:14:43.761 } 00:14:43.761 ] 00:14:43.761 } 00:14:43.761 ] 00:14:43.761 }' 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:43.761 09:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72963 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72963 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72963 ']' 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.761 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.761 [2024-10-08 09:20:35.272727] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:43.761 [2024-10-08 09:20:35.272867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.761 [2024-10-08 09:20:35.408460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.020 [2024-10-08 09:20:35.494680] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.020 [2024-10-08 09:20:35.494763] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.020 [2024-10-08 09:20:35.494791] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.020 [2024-10-08 09:20:35.494806] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.020 [2024-10-08 09:20:35.494813] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:44.020 [2024-10-08 09:20:35.495264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.020 [2024-10-08 09:20:35.662843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.279 [2024-10-08 09:20:35.741036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.279 [2024-10-08 09:20:35.779019] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:44.279 [2024-10-08 09:20:35.779269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72995 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72995 /var/tmp/bdevperf.sock 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72995 ']' 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:44.847 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:44.847 "subsystems": [ 00:14:44.847 { 00:14:44.847 "subsystem": "keyring", 00:14:44.847 "config": [ 00:14:44.847 { 00:14:44.847 "method": "keyring_file_add_key", 00:14:44.847 "params": { 00:14:44.847 "name": "key0", 00:14:44.847 "path": "/tmp/tmp.cV4vmCYsGW" 00:14:44.847 } 00:14:44.847 } 00:14:44.847 ] 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "subsystem": "iobuf", 00:14:44.847 "config": [ 00:14:44.847 { 00:14:44.847 "method": "iobuf_set_options", 00:14:44.847 "params": { 00:14:44.847 "small_pool_count": 8192, 00:14:44.847 "large_pool_count": 1024, 00:14:44.847 "small_bufsize": 8192, 00:14:44.847 "large_bufsize": 135168 00:14:44.847 } 00:14:44.847 } 00:14:44.847 ] 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "subsystem": "sock", 00:14:44.847 "config": [ 00:14:44.847 { 00:14:44.847 "method": "sock_set_default_impl", 00:14:44.847 "params": { 00:14:44.847 "impl_name": "uring" 00:14:44.847 } 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "method": "sock_impl_set_options", 00:14:44.847 "params": { 00:14:44.847 "impl_name": "ssl", 00:14:44.847 "recv_buf_size": 4096, 00:14:44.847 "send_buf_size": 4096, 00:14:44.847 "enable_recv_pipe": true, 00:14:44.847 "enable_quickack": false, 00:14:44.847 "enable_placement_id": 0, 00:14:44.847 "enable_zerocopy_send_server": true, 00:14:44.847 "enable_zerocopy_send_client": false, 00:14:44.847 "zerocopy_threshold": 0, 00:14:44.847 "tls_version": 0, 00:14:44.847 "enable_ktls": false 00:14:44.847 } 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "method": "sock_impl_set_options", 00:14:44.847 "params": { 00:14:44.847 "impl_name": "posix", 00:14:44.847 "recv_buf_size": 2097152, 00:14:44.847 "send_buf_size": 2097152, 00:14:44.847 "enable_recv_pipe": true, 00:14:44.847 "enable_quickack": false, 00:14:44.847 "enable_placement_id": 0, 00:14:44.847 "enable_zerocopy_send_server": true, 00:14:44.847 "enable_zerocopy_send_client": false, 00:14:44.847 "zerocopy_threshold": 0, 00:14:44.847 "tls_version": 0, 00:14:44.847 "enable_ktls": false 00:14:44.847 } 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "method": "sock_impl_set_options", 00:14:44.847 "params": { 00:14:44.847 "impl_name": "uring", 00:14:44.847 "recv_buf_size": 2097152, 00:14:44.847 "send_buf_size": 2097152, 00:14:44.847 "enable_recv_pipe": true, 00:14:44.847 "enable_quickack": false, 00:14:44.847 "enable_placement_id": 0, 00:14:44.847 "enable_zerocopy_send_server": false, 00:14:44.847 "enable_zerocopy_send_client": false, 00:14:44.847 "zerocopy_threshold": 0, 00:14:44.847 "tls_version": 0, 00:14:44.847 "enable_ktls": false 00:14:44.847 } 00:14:44.847 } 00:14:44.847 ] 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "subsystem": "vmd", 00:14:44.847 "config": [] 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "subsystem": "accel", 00:14:44.847 "config": [ 00:14:44.847 { 00:14:44.847 "method": "accel_set_options", 00:14:44.847 "params": { 00:14:44.847 "small_cache_size": 128, 00:14:44.847 "large_cache_size": 16, 00:14:44.847 "task_count": 2048, 00:14:44.847 "sequence_count": 2048, 00:14:44.847 "buf_count": 2048 
00:14:44.847 } 00:14:44.847 } 00:14:44.847 ] 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "subsystem": "bdev", 00:14:44.847 "config": [ 00:14:44.847 { 00:14:44.847 "method": "bdev_set_options", 00:14:44.847 "params": { 00:14:44.847 "bdev_io_pool_size": 65535, 00:14:44.847 "bdev_io_cache_size": 256, 00:14:44.847 "bdev_auto_examine": true, 00:14:44.847 "iobuf_small_cache_size": 128, 00:14:44.847 "iobuf_large_cache_size": 16 00:14:44.847 } 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "method": "bdev_raid_set_options", 00:14:44.847 "params": { 00:14:44.847 "process_window_size_kb": 1024, 00:14:44.847 "process_max_bandwidth_mb_sec": 0 00:14:44.847 } 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "method": "bdev_iscsi_set_options", 00:14:44.847 "params": { 00:14:44.847 "timeout_sec": 30 00:14:44.847 } 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "method": "bdev_nvme_set_options", 00:14:44.847 "params": { 00:14:44.847 "action_on_timeout": "none", 00:14:44.847 "timeout_us": 0, 00:14:44.847 "timeout_admin_us": 0, 00:14:44.847 "keep_alive_timeout_ms": 10000, 00:14:44.847 "arbitration_burst": 0, 00:14:44.847 "low_priority_weight": 0, 00:14:44.847 "medium_priority_weight": 0, 00:14:44.847 "high_priority_weight": 0, 00:14:44.847 "nvme_adminq_poll_period_us": 10000, 00:14:44.847 "nvme_ioq_poll_period_us": 0, 00:14:44.847 "io_queue_requests": 512, 00:14:44.847 "delay_cmd_submit": true, 00:14:44.847 "transport_retry_count": 4, 00:14:44.847 "bdev_retry_count": 3, 00:14:44.847 "transport_ack_timeout": 0, 00:14:44.847 "ctrlr_loss_timeout_sec": 0, 00:14:44.847 "reconnect_delay_sec": 0, 00:14:44.847 "fast_io_fail_timeout_sec": 0, 00:14:44.848 "disable_auto_failback": false, 00:14:44.848 "generate_uuids": false, 00:14:44.848 "transport_tos": 0, 00:14:44.848 "nvme_error_stat": false, 00:14:44.848 "rdma_srq_size": 0, 00:14:44.848 "io_path_stat": false, 00:14:44.848 "allow_accel_sequence": false, 00:14:44.848 "rdma_max_cq_size": 0, 00:14:44.848 "rdma_cm_event_timeout_ms": 0, 00:14:44.848 "dhchap_digests": [ 00:14:44.848 "sha256", 00:14:44.848 "sha384", 00:14:44.848 "sha512" 00:14:44.848 ], 00:14:44.848 "dhchap_dhgroups": [ 00:14:44.848 "null", 00:14:44.848 "ffdhe2048", 00:14:44.848 "ffdhe3072", 00:14:44.848 "ffdhe4096", 00:14:44.848 "ffdhe6144", 00:14:44.848 "ffdhe8192" 00:14:44.848 ] 00:14:44.848 } 00:14:44.848 }, 00:14:44.848 { 00:14:44.848 "method": "bdev_nvme_attach_controller", 00:14:44.848 "params": { 00:14:44.848 "name": "nvme0", 00:14:44.848 "trtype": "TCP", 00:14:44.848 "adrfam": "IPv4", 00:14:44.848 "traddr": "10.0.0.3", 00:14:44.848 "trsvcid": "4420", 00:14:44.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.848 "prchk_reftag": false, 00:14:44.848 "prchk_guard": false, 00:14:44.848 "ctrlr_loss_timeout_sec": 0, 00:14:44.848 "reconnect_delay_sec": 0, 00:14:44.848 "fast_io_fail_timeout_sec": 0, 00:14:44.848 "psk": "key0", 00:14:44.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.848 "hdgst": false, 00:14:44.848 "ddgst": false, 00:14:44.848 "multipath": "multipath" 00:14:44.848 } 00:14:44.848 }, 00:14:44.848 { 00:14:44.848 "method": "bdev_nvme_set_hotplug", 00:14:44.848 "params": { 00:14:44.848 "period_us": 100000, 00:14:44.848 "enable": false 00:14:44.848 } 00:14:44.848 }, 00:14:44.848 { 00:14:44.848 "method": "bdev_enable_histogram", 00:14:44.848 "params": { 00:14:44.848 "name": "nvme0n1", 00:14:44.848 "enable": true 00:14:44.848 } 00:14:44.848 }, 00:14:44.848 { 00:14:44.848 "method": "bdev_wait_for_examine" 00:14:44.848 } 00:14:44.848 ] 00:14:44.848 }, 00:14:44.848 { 00:14:44.848 "subsystem": "nbd", 
00:14:44.848 "config": [] 00:14:44.848 } 00:14:44.848 ] 00:14:44.848 }' 00:14:44.848 [2024-10-08 09:20:36.409957] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:44.848 [2024-10-08 09:20:36.410059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72995 ] 00:14:45.107 [2024-10-08 09:20:36.546215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.107 [2024-10-08 09:20:36.656550] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.366 [2024-10-08 09:20:36.793892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.366 [2024-10-08 09:20:36.843004] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.966 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.967 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:45.967 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:45.967 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:46.225 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.225 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:46.225 Running I/O for 1 seconds... 
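Before the timed run, the script first checks that the attach actually produced a controller: bdev_nvme_get_controllers is queried over the bdevperf socket, the reported name is compared against nvme0, and only then is perform_tests issued. Condensed into a standalone check (values as in this run; the workload itself was set on the bdevperf command line with -q 128 -o 4k -w verify -t 1):

    # list controllers known to the bdevperf app and extract their names
    name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')

    # proceed only if the controller created by bdev_nvme_attach_controller is present
    [[ "$name" == "nvme0" ]] || exit 1

    # trigger the verify workload; results are reported as a JSON block like the one below
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests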
00:14:47.163 4278.00 IOPS, 16.71 MiB/s 00:14:47.163 Latency(us) 00:14:47.163 [2024-10-08T09:20:38.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.163 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.163 Verification LBA range: start 0x0 length 0x2000 00:14:47.163 nvme0n1 : 1.02 4315.66 16.86 0.00 0.00 29276.58 5510.98 18707.55 00:14:47.163 [2024-10-08T09:20:38.846Z] =================================================================================================================== 00:14:47.163 [2024-10-08T09:20:38.846Z] Total : 4315.66 16.86 0.00 0.00 29276.58 5510.98 18707.55 00:14:47.163 { 00:14:47.163 "results": [ 00:14:47.163 { 00:14:47.163 "job": "nvme0n1", 00:14:47.163 "core_mask": "0x2", 00:14:47.163 "workload": "verify", 00:14:47.163 "status": "finished", 00:14:47.163 "verify_range": { 00:14:47.163 "start": 0, 00:14:47.163 "length": 8192 00:14:47.163 }, 00:14:47.163 "queue_depth": 128, 00:14:47.163 "io_size": 4096, 00:14:47.163 "runtime": 1.020932, 00:14:47.163 "iops": 4315.664510466907, 00:14:47.163 "mibps": 16.858064494011355, 00:14:47.163 "io_failed": 0, 00:14:47.163 "io_timeout": 0, 00:14:47.163 "avg_latency_us": 29276.581296579046, 00:14:47.163 "min_latency_us": 5510.981818181818, 00:14:47.163 "max_latency_us": 18707.54909090909 00:14:47.163 } 00:14:47.163 ], 00:14:47.163 "core_count": 1 00:14:47.163 } 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:47.163 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:47.423 nvmf_trace.0 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72995 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72995 ']' 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72995 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72995 00:14:47.423 09:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72995' 00:14:47.423 killing process with pid 72995 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72995 00:14:47.423 Received shutdown signal, test time was about 1.000000 seconds 00:14:47.423 00:14:47.423 Latency(us) 00:14:47.423 [2024-10-08T09:20:39.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.423 [2024-10-08T09:20:39.106Z] =================================================================================================================== 00:14:47.423 [2024-10-08T09:20:39.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.423 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72995 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.682 rmmod nvme_tcp 00:14:47.682 rmmod nvme_fabrics 00:14:47.682 rmmod nvme_keyring 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 72963 ']' 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 72963 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72963 ']' 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72963 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72963 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:47.682 killing process with pid 72963 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72963' 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72963 00:14:47.682 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # 
wait 72963 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:47.941 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ENSrzk2KNc /tmp/tmp.P6MqKGAApN /tmp/tmp.cV4vmCYsGW 00:14:48.200 00:14:48.200 real 1m28.957s 00:14:48.200 user 2m24.859s 00:14:48.200 sys 0m27.848s 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.200 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
00:14:48.200 ************************************ 00:14:48.200 END TEST nvmf_tls 00:14:48.200 ************************************ 00:14:48.460 09:20:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:48.460 09:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:48.460 09:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.460 09:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.460 ************************************ 00:14:48.460 START TEST nvmf_fips 00:14:48.460 ************************************ 00:14:48.460 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:48.460 * Looking for test storage... 00:14:48.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:48.460 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:48.460 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:14:48.460 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:48.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.460 --rc genhtml_branch_coverage=1 00:14:48.460 --rc genhtml_function_coverage=1 00:14:48.460 --rc genhtml_legend=1 00:14:48.460 --rc geninfo_all_blocks=1 00:14:48.460 --rc geninfo_unexecuted_blocks=1 00:14:48.460 00:14:48.460 ' 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:48.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.460 --rc genhtml_branch_coverage=1 00:14:48.460 --rc genhtml_function_coverage=1 00:14:48.460 --rc genhtml_legend=1 00:14:48.460 --rc geninfo_all_blocks=1 00:14:48.460 --rc geninfo_unexecuted_blocks=1 00:14:48.460 00:14:48.460 ' 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:48.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.460 --rc genhtml_branch_coverage=1 00:14:48.460 --rc genhtml_function_coverage=1 00:14:48.460 --rc genhtml_legend=1 00:14:48.460 --rc geninfo_all_blocks=1 00:14:48.460 --rc geninfo_unexecuted_blocks=1 00:14:48.460 00:14:48.460 ' 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:48.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.460 --rc genhtml_branch_coverage=1 00:14:48.460 --rc genhtml_function_coverage=1 00:14:48.460 --rc genhtml_legend=1 00:14:48.460 --rc geninfo_all_blocks=1 00:14:48.460 --rc geninfo_unexecuted_blocks=1 00:14:48.460 00:14:48.460 ' 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.460 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.461 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.461 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:48.720 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:48.721 Error setting digest 00:14:48.721 4012F363207F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:48.721 4012F363207F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:48.721 
09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:48.721 Cannot find device "nvmf_init_br" 00:14:48.721 09:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:48.721 Cannot find device "nvmf_init_br2" 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:48.721 Cannot find device "nvmf_tgt_br" 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.721 Cannot find device "nvmf_tgt_br2" 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:48.721 Cannot find device "nvmf_init_br" 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:48.721 Cannot find device "nvmf_init_br2" 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:48.721 Cannot find device "nvmf_tgt_br" 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:48.721 Cannot find device "nvmf_tgt_br2" 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:48.721 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:48.980 Cannot find device "nvmf_br" 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:48.980 Cannot find device "nvmf_init_if" 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:48.980 Cannot find device "nvmf_init_if2" 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:48.980 09:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:48.980 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:49.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:49.239 00:14:49.239 --- 10.0.0.3 ping statistics --- 00:14:49.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.239 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:49.239 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:49.239 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:14:49.239 00:14:49.239 --- 10.0.0.4 ping statistics --- 00:14:49.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.239 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:49.239 00:14:49.239 --- 10.0.0.1 ping statistics --- 00:14:49.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.239 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:49.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:49.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:49.239 00:14:49.239 --- 10.0.0.2 ping statistics --- 00:14:49.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.239 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # return 0 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=73315 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 73315 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73315 ']' 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.239 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:49.239 [2024-10-08 09:20:40.812513] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:14:49.239 [2024-10-08 09:20:40.812618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.498 [2024-10-08 09:20:40.950525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.498 [2024-10-08 09:20:41.058462] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.498 [2024-10-08 09:20:41.058536] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.498 [2024-10-08 09:20:41.058551] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.498 [2024-10-08 09:20:41.058561] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.498 [2024-10-08 09:20:41.058571] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.498 [2024-10-08 09:20:41.059088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.499 [2024-10-08 09:20:41.118384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.DLg 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.DLg 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.DLg 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.DLg 00:14:50.435 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.694 [2024-10-08 09:20:42.216635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.694 [2024-10-08 09:20:42.232581] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:50.694 [2024-10-08 09:20:42.232827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:50.694 malloc0 00:14:50.694 09:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.694 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73361 00:14:50.694 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73361 /var/tmp/bdevperf.sock 00:14:50.694 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73361 ']' 00:14:50.694 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:50.694 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.694 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.694 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.694 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.694 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 [2024-10-08 09:20:42.376081] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:50.694 [2024-10-08 09:20:42.376186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73361 ] 00:14:50.953 [2024-10-08 09:20:42.513444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.953 [2024-10-08 09:20:42.618791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.212 [2024-10-08 09:20:42.677367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.779 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.779 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:51.779 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.DLg 00:14:52.038 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:52.296 [2024-10-08 09:20:43.890721] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.296 TLSTESTn1 00:14:52.555 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:52.555 Running I/O for 10 seconds... 
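The key echoed into /tmp/spdk-psk.DLg above follows the NVMe/TCP PSK interchange format: the literal prefix "NVMeTLSkey-1:", a two-digit hash indicator ("01" here), a base64 payload carrying the configured PSK plus a CRC32, and a trailing ":". A small sketch that checks the payload length, assuming that layout (illustration only, not part of the test flow):

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
b64=${key#NVMeTLSkey-1:01:}; b64=${b64%:}
echo -n "$b64" | base64 -d | wc -c   # prints 36: 32-byte configured PSK + 4-byte CRC32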
00:14:54.427 4243.00 IOPS, 16.57 MiB/s [2024-10-08T09:20:47.487Z] 4375.50 IOPS, 17.09 MiB/s [2024-10-08T09:20:48.423Z] 4387.33 IOPS, 17.14 MiB/s [2024-10-08T09:20:49.359Z] 4426.75 IOPS, 17.29 MiB/s [2024-10-08T09:20:50.295Z] 4430.40 IOPS, 17.31 MiB/s [2024-10-08T09:20:51.231Z] 4461.50 IOPS, 17.43 MiB/s [2024-10-08T09:20:52.167Z] 4470.29 IOPS, 17.46 MiB/s [2024-10-08T09:20:53.104Z] 4477.00 IOPS, 17.49 MiB/s [2024-10-08T09:20:54.481Z] 4472.11 IOPS, 17.47 MiB/s [2024-10-08T09:20:54.481Z] 4479.90 IOPS, 17.50 MiB/s 00:15:02.798 Latency(us) 00:15:02.798 [2024-10-08T09:20:54.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.798 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:02.798 Verification LBA range: start 0x0 length 0x2000 00:15:02.798 TLSTESTn1 : 10.02 4485.69 17.52 0.00 0.00 28484.56 5451.40 23473.80 00:15:02.798 [2024-10-08T09:20:54.481Z] =================================================================================================================== 00:15:02.798 [2024-10-08T09:20:54.481Z] Total : 4485.69 17.52 0.00 0.00 28484.56 5451.40 23473.80 00:15:02.798 { 00:15:02.798 "results": [ 00:15:02.798 { 00:15:02.798 "job": "TLSTESTn1", 00:15:02.798 "core_mask": "0x4", 00:15:02.798 "workload": "verify", 00:15:02.798 "status": "finished", 00:15:02.798 "verify_range": { 00:15:02.798 "start": 0, 00:15:02.798 "length": 8192 00:15:02.798 }, 00:15:02.798 "queue_depth": 128, 00:15:02.798 "io_size": 4096, 00:15:02.798 "runtime": 10.015192, 00:15:02.798 "iops": 4485.685346821109, 00:15:02.798 "mibps": 17.52220838601996, 00:15:02.798 "io_failed": 0, 00:15:02.798 "io_timeout": 0, 00:15:02.798 "avg_latency_us": 28484.55914928922, 00:15:02.798 "min_latency_us": 5451.403636363636, 00:15:02.798 "max_latency_us": 23473.803636363635 00:15:02.798 } 00:15:02.798 ], 00:15:02.798 "core_count": 1 00:15:02.798 } 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:02.798 nvmf_trace.0 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73361 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73361 ']' 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
73361 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73361 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:02.798 killing process with pid 73361 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73361' 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73361 00:15:02.798 Received shutdown signal, test time was about 10.000000 seconds 00:15:02.798 00:15:02.798 Latency(us) 00:15:02.798 [2024-10-08T09:20:54.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.798 [2024-10-08T09:20:54.481Z] =================================================================================================================== 00:15:02.798 [2024-10-08T09:20:54.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.798 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73361 00:15:03.057 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:03.057 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:03.057 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:03.058 rmmod nvme_tcp 00:15:03.058 rmmod nvme_fabrics 00:15:03.058 rmmod nvme_keyring 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 73315 ']' 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 73315 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73315 ']' 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73315 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73315 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73315' 00:15:03.058 killing process with pid 73315 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73315 00:15:03.058 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73315 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:03.317 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:03.575 09:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.DLg 00:15:03.575 00:15:03.575 real 0m15.204s 00:15:03.575 user 0m21.206s 00:15:03.575 sys 0m5.835s 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:03.575 ************************************ 00:15:03.575 END TEST nvmf_fips 00:15:03.575 ************************************ 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:03.575 ************************************ 00:15:03.575 START TEST nvmf_control_msg_list 00:15:03.575 ************************************ 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:03.575 * Looking for test storage... 00:15:03.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:15:03.575 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:03.834 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.835 --rc genhtml_branch_coverage=1 00:15:03.835 --rc genhtml_function_coverage=1 00:15:03.835 --rc genhtml_legend=1 00:15:03.835 --rc geninfo_all_blocks=1 00:15:03.835 --rc geninfo_unexecuted_blocks=1 00:15:03.835 00:15:03.835 ' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.835 --rc genhtml_branch_coverage=1 00:15:03.835 --rc genhtml_function_coverage=1 00:15:03.835 --rc genhtml_legend=1 00:15:03.835 --rc geninfo_all_blocks=1 00:15:03.835 --rc geninfo_unexecuted_blocks=1 00:15:03.835 00:15:03.835 ' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.835 --rc genhtml_branch_coverage=1 00:15:03.835 --rc genhtml_function_coverage=1 00:15:03.835 --rc genhtml_legend=1 00:15:03.835 --rc geninfo_all_blocks=1 00:15:03.835 --rc geninfo_unexecuted_blocks=1 00:15:03.835 00:15:03.835 ' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.835 --rc genhtml_branch_coverage=1 00:15:03.835 --rc genhtml_function_coverage=1 00:15:03.835 --rc genhtml_legend=1 00:15:03.835 --rc geninfo_all_blocks=1 00:15:03.835 --rc 
geninfo_unexecuted_blocks=1 00:15:03.835 00:15:03.835 ' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:03.835 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:03.835 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:03.836 Cannot find device "nvmf_init_br" 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:03.836 Cannot find device "nvmf_init_br2" 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:03.836 Cannot find device "nvmf_tgt_br" 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.836 Cannot find device "nvmf_tgt_br2" 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:03.836 Cannot find device "nvmf_init_br" 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:03.836 Cannot find device "nvmf_init_br2" 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:03.836 Cannot find device "nvmf_tgt_br" 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:03.836 Cannot find device "nvmf_tgt_br2" 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:03.836 Cannot find device "nvmf_br" 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:03.836 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:04.095 Cannot find 
device "nvmf_init_if" 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:04.095 Cannot find device "nvmf_init_if2" 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:04.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:04.095 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:04.096 09:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:04.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:04.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:15:04.096 00:15:04.096 --- 10.0.0.3 ping statistics --- 00:15:04.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.096 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:04.096 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:04.096 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:04.096 00:15:04.096 --- 10.0.0.4 ping statistics --- 00:15:04.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.096 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:04.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:04.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:04.096 00:15:04.096 --- 10.0.0.1 ping statistics --- 00:15:04.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.096 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:04.096 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:04.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:15:04.355 00:15:04.355 --- 10.0.0.2 ping statistics --- 00:15:04.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.355 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # return 0 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=73747 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 73747 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 73747 ']' 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
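
The remaining plumbing traced above ties both sides together and verifies reachability: the peer ends are enslaved to a bridge, NVMe/TCP port 4420 and bridge forwarding are whitelisted in iptables with an SPDK_NVMF comment (so teardown can strip exactly those rules later), and single-packet pings are sent in both directions. A minimal sketch of that step, using a local helper modeled on the ipts wrapper seen in the trace:

    # Sketch only; ipts_like mirrors the ipts helper from nvmf/common.sh shown above.
    ipts_like() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" master nvmf_br            # enslave both sides to the bridge
    done

    ipts_like -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts_like -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts_like -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4               # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # namespace -> host
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With the path verified, the target application is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF, pid 73747 here) and the test waits for its /var/tmp/spdk.sock RPC socket, which is what the waitforlisten entries that follow are doing.
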
00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.355 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:04.355 [2024-10-08 09:20:55.863779] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:15:04.355 [2024-10-08 09:20:55.863866] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.355 [2024-10-08 09:20:56.005891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.614 [2024-10-08 09:20:56.114810] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.614 [2024-10-08 09:20:56.114865] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.614 [2024-10-08 09:20:56.114879] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.614 [2024-10-08 09:20:56.114890] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.614 [2024-10-08 09:20:56.114899] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.614 [2024-10-08 09:20:56.115345] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.614 [2024-10-08 09:20:56.171699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:05.551 [2024-10-08 09:20:56.928397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:05.551 Malloc0 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:05.551 [2024-10-08 09:20:56.977616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73779 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73780 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73781 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73779 00:15:05.551 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:05.551 [2024-10-08 09:20:57.145971] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:05.551 [2024-10-08 09:20:57.156314] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:05.551 [2024-10-08 09:20:57.156604] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:06.486 Initializing NVMe Controllers 00:15:06.486 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:06.486 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:06.486 Initialization complete. Launching workers. 00:15:06.486 ======================================================== 00:15:06.486 Latency(us) 00:15:06.486 Device Information : IOPS MiB/s Average min max 00:15:06.486 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3591.00 14.03 278.13 127.56 587.80 00:15:06.486 ======================================================== 00:15:06.486 Total : 3591.00 14.03 278.13 127.56 587.80 00:15:06.486 00:15:06.745 Initializing NVMe Controllers 00:15:06.745 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:06.745 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:06.745 Initialization complete. Launching workers. 00:15:06.745 ======================================================== 00:15:06.745 Latency(us) 00:15:06.745 Device Information : IOPS MiB/s Average min max 00:15:06.746 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3586.96 14.01 278.45 170.96 451.45 00:15:06.746 ======================================================== 00:15:06.746 Total : 3586.96 14.01 278.45 170.96 451.45 00:15:06.746 00:15:06.746 Initializing NVMe Controllers 00:15:06.746 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:06.746 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:06.746 Initialization complete. Launching workers. 
00:15:06.746 ======================================================== 00:15:06.746 Latency(us) 00:15:06.746 Device Information : IOPS MiB/s Average min max 00:15:06.746 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3567.00 13.93 279.95 178.26 673.54 00:15:06.746 ======================================================== 00:15:06.746 Total : 3567.00 13.93 279.95 178.26 673.54 00:15:06.746 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73780 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73781 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:06.746 rmmod nvme_tcp 00:15:06.746 rmmod nvme_fabrics 00:15:06.746 rmmod nvme_keyring 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 73747 ']' 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 73747 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 73747 ']' 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 73747 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73747 00:15:06.746 killing process with pid 73747 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73747' 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 73747 00:15:06.746 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
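
Stripped of the xtrace prefixes, the control_msg_list test body above is a short RPC-plus-perf sequence: one TCP transport capped at a single control message, one malloc-backed namespace, one listener, and three single-queue perf clients pinned to different cores so they contend for that control message. A hedged reconstruction of the commands the trace shows (rpc_cmd is the test suite's RPC wrapper; paths and NQNs as logged):

    subnqn=nqn.2024-07.io.spdk:cnode0
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    rpc_cmd nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc_cmd nvmf_create_subsystem "$subnqn" -a                    # -a: allow any host
    rpc_cmd bdev_malloc_create -b Malloc0 32 512                  # 32 MB bdev, 512-byte blocks
    rpc_cmd nvmf_subsystem_add_ns "$subnqn" Malloc0
    rpc_cmd nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.3 -s 4420

    r='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    "$perf" -c 0x2 -q 1 -o 4096 -w randread -t 1 -r "$r" & pid1=$!   # lcore 1
    "$perf" -c 0x4 -q 1 -o 4096 -w randread -t 1 -r "$r" & pid2=$!   # lcore 2
    "$perf" -c 0x8 -q 1 -o 4096 -w randread -t 1 -r "$r" & pid3=$!   # lcore 3
    wait "$pid1" "$pid2" "$pid3"

All three clients finish with comparable numbers in the results above (roughly 3,570-3,590 IOPS and ~278 us average latency each), which is presumably the property the test is after: limiting the transport to one control message does not starve any initiator.
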
common/autotest_common.sh@974 -- # wait 73747 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:07.004 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:07.263 00:15:07.263 real 0m3.634s 00:15:07.263 user 0m5.676s 00:15:07.263 
sys 0m1.337s 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:07.263 ************************************ 00:15:07.263 END TEST nvmf_control_msg_list 00:15:07.263 ************************************ 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.263 ************************************ 00:15:07.263 START TEST nvmf_wait_for_buf 00:15:07.263 ************************************ 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:07.263 * Looking for test storage... 00:15:07.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:07.263 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:07.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.523 --rc genhtml_branch_coverage=1 00:15:07.523 --rc genhtml_function_coverage=1 00:15:07.523 --rc genhtml_legend=1 00:15:07.523 --rc geninfo_all_blocks=1 00:15:07.523 --rc geninfo_unexecuted_blocks=1 00:15:07.523 00:15:07.523 ' 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:07.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.523 --rc genhtml_branch_coverage=1 00:15:07.523 --rc genhtml_function_coverage=1 00:15:07.523 --rc genhtml_legend=1 00:15:07.523 --rc geninfo_all_blocks=1 00:15:07.523 --rc geninfo_unexecuted_blocks=1 00:15:07.523 00:15:07.523 ' 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:07.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.523 --rc genhtml_branch_coverage=1 00:15:07.523 --rc genhtml_function_coverage=1 00:15:07.523 --rc genhtml_legend=1 00:15:07.523 --rc geninfo_all_blocks=1 00:15:07.523 --rc geninfo_unexecuted_blocks=1 00:15:07.523 00:15:07.523 ' 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:07.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.523 --rc genhtml_branch_coverage=1 00:15:07.523 --rc genhtml_function_coverage=1 00:15:07.523 --rc genhtml_legend=1 00:15:07.523 --rc geninfo_all_blocks=1 00:15:07.523 --rc geninfo_unexecuted_blocks=1 00:15:07.523 00:15:07.523 ' 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:07.523 09:20:59 
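
The scripts/common.sh lines above (and the identical block at the start of the control_msg_list run) are the suite's stock shell version comparison: lcov --version is reduced to its last field with awk, split on dots, and compared field by field against 2 to decide whether the older --rc lcov_branch_coverage/lcov_function_coverage option spelling is needed. A minimal standalone sketch of the same idea, not the common.sh implementation itself:

    # version_lt A B: succeed when dot-separated numeric version A sorts strictly before B.
    version_lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v x y
        for (( v = 0; v < n; v++ )); do
            x=${a[v]:-0}; y=${b[v]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                                   # equal is not "less than"
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')  # 1.15 in this run, per the trace
    if version_lt "$lcov_ver" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
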
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.523 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:07.524 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 
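
The "[: : integer expression expected" message just above (also emitted during the control_msg_list run) comes from nvmf/common.sh line 33 running [ '' -eq 1 ] while the variable under test is empty; the test simply fails and the script carries on, so in this trace it is noise rather than a failure. A tiny illustration of the behavior and two quieter ways to write such a check (the real variable name is not visible in the trace; flag stands in for it):

    flag=""                                    # empty, as in the trace above
    [ "$flag" -eq 1 ] && echo enabled          # stderr: "[: : integer expression expected"

    [ "${flag:-0}" -eq 1 ] && echo enabled     # default the empty value to 0
    [[ $flag == 1 ]] && echo enabled           # plain string comparison, no numeric parsing
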
00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:07.524 Cannot find device "nvmf_init_br" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:07.524 Cannot find device "nvmf_init_br2" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:07.524 Cannot find device "nvmf_tgt_br" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:07.524 Cannot find device "nvmf_tgt_br2" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:07.524 Cannot find device "nvmf_init_br" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:07.524 Cannot find device "nvmf_init_br2" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:07.524 Cannot find device "nvmf_tgt_br" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:07.524 Cannot find device "nvmf_tgt_br2" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:07.524 Cannot find device "nvmf_br" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:07.524 Cannot find device "nvmf_init_if" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:07.524 Cannot find device "nvmf_init_if2" 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.524 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:07.524 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:07.783 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:07.784 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:07.784 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:07.784 00:15:07.784 --- 10.0.0.3 ping statistics --- 00:15:07.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.784 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:07.784 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:07.784 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:07.784 00:15:07.784 --- 10.0.0.4 ping statistics --- 00:15:07.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.784 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:07.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:07.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:07.784 00:15:07.784 --- 10.0.0.1 ping statistics --- 00:15:07.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.784 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:07.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:07.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:07.784 00:15:07.784 --- 10.0.0.2 ping statistics --- 00:15:07.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.784 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # return 0 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=74027 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 74027 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 74027 ']' 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.784 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:08.041 [2024-10-08 09:20:59.513856] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:15:08.042 [2024-10-08 09:20:59.513947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.042 [2024-10-08 09:20:59.654960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.299 [2024-10-08 09:20:59.757655] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.299 [2024-10-08 09:20:59.757722] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.299 [2024-10-08 09:20:59.757758] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.300 [2024-10-08 09:20:59.757771] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.300 [2024-10-08 09:20:59.757781] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.300 [2024-10-08 09:20:59.758264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.866 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.866 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:15:08.866 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:08.866 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:08.866 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.125 09:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.125 [2024-10-08 09:21:00.644970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.125 Malloc0 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.125 [2024-10-08 09:21:00.708766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.125 [2024-10-08 09:21:00.736888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.125 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:09.383 [2024-10-08 09:21:00.910844] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:10.759 Initializing NVMe Controllers 00:15:10.759 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:10.759 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:10.759 Initialization complete. Launching workers. 00:15:10.759 ======================================================== 00:15:10.759 Latency(us) 00:15:10.759 Device Information : IOPS MiB/s Average min max 00:15:10.759 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 500.00 62.50 8039.69 6257.29 12066.94 00:15:10.759 ======================================================== 00:15:10.759 Total : 500.00 62.50 8039.69 6257.29 12066.94 00:15:10.759 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:10.759 rmmod nvme_tcp 00:15:10.759 rmmod nvme_fabrics 00:15:10.759 rmmod nvme_keyring 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 74027 ']' 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 74027 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 74027 ']' 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 74027 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74027 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74027' 00:15:10.759 killing process with pid 74027 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 74027 00:15:10.759 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 74027 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:11.018 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:11.276 00:15:11.276 real 0m4.000s 00:15:11.276 user 0m3.608s 00:15:11.276 sys 0m0.805s 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:11.276 ************************************ 00:15:11.276 END TEST nvmf_wait_for_buf 00:15:11.276 ************************************ 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:11.276 00:15:11.276 real 5m6.915s 00:15:11.276 user 10m41.788s 00:15:11.276 sys 1m7.912s 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:11.276 09:21:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.276 ************************************ 00:15:11.276 END TEST nvmf_target_extra 00:15:11.276 ************************************ 00:15:11.277 09:21:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:11.277 09:21:02 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:11.277 09:21:02 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:11.277 09:21:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.277 ************************************ 00:15:11.277 START TEST nvmf_host 00:15:11.277 ************************************ 00:15:11.277 09:21:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:11.536 * Looking for test storage... 
00:15:11.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:11.536 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:11.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.537 --rc genhtml_branch_coverage=1 00:15:11.537 --rc genhtml_function_coverage=1 00:15:11.537 --rc genhtml_legend=1 00:15:11.537 --rc geninfo_all_blocks=1 00:15:11.537 --rc geninfo_unexecuted_blocks=1 00:15:11.537 00:15:11.537 ' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:11.537 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:11.537 --rc genhtml_branch_coverage=1 00:15:11.537 --rc genhtml_function_coverage=1 00:15:11.537 --rc genhtml_legend=1 00:15:11.537 --rc geninfo_all_blocks=1 00:15:11.537 --rc geninfo_unexecuted_blocks=1 00:15:11.537 00:15:11.537 ' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:11.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.537 --rc genhtml_branch_coverage=1 00:15:11.537 --rc genhtml_function_coverage=1 00:15:11.537 --rc genhtml_legend=1 00:15:11.537 --rc geninfo_all_blocks=1 00:15:11.537 --rc geninfo_unexecuted_blocks=1 00:15:11.537 00:15:11.537 ' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:11.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.537 --rc genhtml_branch_coverage=1 00:15:11.537 --rc genhtml_function_coverage=1 00:15:11.537 --rc genhtml_legend=1 00:15:11.537 --rc geninfo_all_blocks=1 00:15:11.537 --rc geninfo_unexecuted_blocks=1 00:15:11.537 00:15:11.537 ' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:11.537 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:11.537 
09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.537 ************************************ 00:15:11.537 START TEST nvmf_identify 00:15:11.537 ************************************ 00:15:11.537 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:11.798 * Looking for test storage... 00:15:11.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:11.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.798 --rc genhtml_branch_coverage=1 00:15:11.798 --rc genhtml_function_coverage=1 00:15:11.798 --rc genhtml_legend=1 00:15:11.798 --rc geninfo_all_blocks=1 00:15:11.798 --rc geninfo_unexecuted_blocks=1 00:15:11.798 00:15:11.798 ' 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:11.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.798 --rc genhtml_branch_coverage=1 00:15:11.798 --rc genhtml_function_coverage=1 00:15:11.798 --rc genhtml_legend=1 00:15:11.798 --rc geninfo_all_blocks=1 00:15:11.798 --rc geninfo_unexecuted_blocks=1 00:15:11.798 00:15:11.798 ' 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:11.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.798 --rc genhtml_branch_coverage=1 00:15:11.798 --rc genhtml_function_coverage=1 00:15:11.798 --rc genhtml_legend=1 00:15:11.798 --rc geninfo_all_blocks=1 00:15:11.798 --rc geninfo_unexecuted_blocks=1 00:15:11.798 00:15:11.798 ' 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:11.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.798 --rc genhtml_branch_coverage=1 00:15:11.798 --rc genhtml_function_coverage=1 00:15:11.798 --rc genhtml_legend=1 00:15:11.798 --rc geninfo_all_blocks=1 00:15:11.798 --rc geninfo_unexecuted_blocks=1 00:15:11.798 00:15:11.798 ' 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.798 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.799 
09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:11.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.799 09:21:03 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:11.799 Cannot find device "nvmf_init_br" 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:11.799 Cannot find device "nvmf_init_br2" 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:11.799 Cannot find device "nvmf_tgt_br" 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:11.799 Cannot find device "nvmf_tgt_br2" 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:11.799 Cannot find device "nvmf_init_br" 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:11.799 Cannot find device "nvmf_init_br2" 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:11.799 Cannot find device "nvmf_tgt_br" 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:11.799 Cannot find device "nvmf_tgt_br2" 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:11.799 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:12.058 Cannot find device "nvmf_br" 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:12.058 Cannot find device "nvmf_init_if" 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:12.058 Cannot find device "nvmf_init_if2" 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.058 
09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:12.058 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:12.316 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:12.316 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:15:12.316 00:15:12.316 --- 10.0.0.3 ping statistics --- 00:15:12.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.316 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:12.316 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:12.316 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:12.316 00:15:12.316 --- 10.0.0.4 ping statistics --- 00:15:12.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.316 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:15:12.316 00:15:12.316 --- 10.0.0.1 ping statistics --- 00:15:12.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.316 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:12.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:12.316 00:15:12.316 --- 10.0.0.2 ping statistics --- 00:15:12.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.316 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # return 0 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74347 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74347 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74347 ']' 00:15:12.316 
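At this point the test network is fully assembled: two veth pairs per side with their host ends bridged on nvmf_br, their target ends moved into the nvmf_tgt_ns_spdk namespace, the 10.0.0.0/24 addressing laid out above, iptables rules admitting TCP port 4420, and the four pings confirming reachability in both directions. A cut-down sketch of the same topology with a single pair per side (omitting the *_if2/*_br2 duplicates the script also creates) looks like:

  # Namespace plus two veth pairs: one for the initiator side, one for the target side.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Initiator keeps 10.0.0.1, the target namespace gets 10.0.0.3, same /24.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # Bridge the host-side peers together and bring every link up.
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Admit NVMe/TCP traffic (port 4420) plus forwarding across the bridge, then verify reachability.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1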
09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.316 09:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:12.316 [2024-10-08 09:21:03.855160] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:15:12.316 [2024-10-08 09:21:03.855247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.316 [2024-10-08 09:21:03.998431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.576 [2024-10-08 09:21:04.106279] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.576 [2024-10-08 09:21:04.106627] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.576 [2024-10-08 09:21:04.106727] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.576 [2024-10-08 09:21:04.106864] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.576 [2024-10-08 09:21:04.106961] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
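The target itself is started inside the namespace with an explicit instance id, tracepoint mask and core mask (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and waitforlisten blocks until the RPC server answers on /var/tmp/spdk.sock; the four "Reactor started on core 0..3" notices that follow correspond to the 0xF core mask. A rough approximation of that launch-and-wait step (the polling loop below is an illustration, not the exact waitforlisten implementation) would be:

  # Start the NVMe-oF target inside the test namespace; keep its PID for later teardown.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll until the JSON-RPC server is reachable on the default socket.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  for _ in $(seq 1 100); do
      $rpc rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done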
00:15:12.576 [2024-10-08 09:21:04.108295] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.576 [2024-10-08 09:21:04.108428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.576 [2024-10-08 09:21:04.108558] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.576 [2024-10-08 09:21:04.108627] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.576 [2024-10-08 09:21:04.165721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.513 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.513 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:15:13.513 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 [2024-10-08 09:21:04.906328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 Malloc0 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.514 09:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 [2024-10-08 09:21:05.007638] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.514 [ 00:15:13.514 { 00:15:13.514 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:13.514 "subtype": "Discovery", 00:15:13.514 "listen_addresses": [ 00:15:13.514 { 00:15:13.514 "trtype": "TCP", 00:15:13.514 "adrfam": "IPv4", 00:15:13.514 "traddr": "10.0.0.3", 00:15:13.514 "trsvcid": "4420" 00:15:13.514 } 00:15:13.514 ], 00:15:13.514 "allow_any_host": true, 00:15:13.514 "hosts": [] 00:15:13.514 }, 00:15:13.514 { 00:15:13.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.514 "subtype": "NVMe", 00:15:13.514 "listen_addresses": [ 00:15:13.514 { 00:15:13.514 "trtype": "TCP", 00:15:13.514 "adrfam": "IPv4", 00:15:13.514 "traddr": "10.0.0.3", 00:15:13.514 "trsvcid": "4420" 00:15:13.514 } 00:15:13.514 ], 00:15:13.514 "allow_any_host": true, 00:15:13.514 "hosts": [], 00:15:13.514 "serial_number": "SPDK00000000000001", 00:15:13.514 "model_number": "SPDK bdev Controller", 00:15:13.514 "max_namespaces": 32, 00:15:13.514 "min_cntlid": 1, 00:15:13.514 "max_cntlid": 65519, 00:15:13.514 "namespaces": [ 00:15:13.514 { 00:15:13.514 "nsid": 1, 00:15:13.514 "bdev_name": "Malloc0", 00:15:13.514 "name": "Malloc0", 00:15:13.514 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:13.514 "eui64": "ABCDEF0123456789", 00:15:13.514 "uuid": "97d225e5-b1cb-4652-8b54-74607c7bd913" 00:15:13.514 } 00:15:13.514 ] 00:15:13.514 } 00:15:13.514 ] 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.514 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:13.514 [2024-10-08 09:21:05.063679] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
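The rpc_cmd calls above map one-to-one onto scripts/rpc.py subcommands, so the same target configuration can be reproduced by hand once the RPC socket is up. A sketch of that sequence, assuming the default /var/tmp/spdk.sock socket and the addresses configured earlier:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  # Same transport options as the rpc_cmd call above (-t tcp -o -u 8192).
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # 64 MB RAM-backed bdev with 512-byte blocks, used as the subsystem's namespace.
  $rpc bdev_malloc_create 64 512 -b Malloc0

  # NVM subsystem, namespace, and TCP listeners (data subsystem plus discovery) on 10.0.0.3:4420.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

  # Dump the resulting configuration; this is the JSON block shown above.
  $rpc nvmf_get_subsystems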
00:15:13.514 [2024-10-08 09:21:05.063736] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74382 ] 00:15:13.817 [2024-10-08 09:21:05.202071] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:13.817 [2024-10-08 09:21:05.202165] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:13.817 [2024-10-08 09:21:05.202173] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:13.817 [2024-10-08 09:21:05.202185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:13.817 [2024-10-08 09:21:05.202195] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:13.817 [2024-10-08 09:21:05.202510] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:13.817 [2024-10-08 09:21:05.202598] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x242a750 0 00:15:13.817 [2024-10-08 09:21:05.209794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:13.817 [2024-10-08 09:21:05.209821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:13.817 [2024-10-08 09:21:05.209843] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:13.817 [2024-10-08 09:21:05.209847] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:13.817 [2024-10-08 09:21:05.209888] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.209896] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.209900] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.817 [2024-10-08 09:21:05.209914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:13.817 [2024-10-08 09:21:05.209946] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.817 [2024-10-08 09:21:05.217780] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.817 [2024-10-08 09:21:05.217802] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.817 [2024-10-08 09:21:05.217822] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.217827] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.817 [2024-10-08 09:21:05.217841] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:13.817 [2024-10-08 09:21:05.217850] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:13.817 [2024-10-08 09:21:05.217856] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:13.817 [2024-10-08 09:21:05.217877] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.217882] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.817 
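Everything from here down to the controller data dump is spdk_nvme_identify bringing up an admin queue against the discovery subsystem: fabric CONNECT, property reads of VS/CAP/CC/CSTS, the CC.EN=1 / CSTS.RDY=1 enable handshake, IDENTIFY, and GET LOG PAGE for the discovery log. The verbose *DEBUG* lines come from the -L all flag on the invocation above; without it only the controller and discovery-log output remains. The same tool can also be pointed at the NVM subsystem instead of the discovery NQN using the same transport-ID syntax (a variation on the command shown above, not part of this run):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'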
[2024-10-08 09:21:05.217886] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.817 [2024-10-08 09:21:05.217896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.817 [2024-10-08 09:21:05.217923] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.817 [2024-10-08 09:21:05.217979] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.817 [2024-10-08 09:21:05.217987] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.817 [2024-10-08 09:21:05.217990] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.217994] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.817 [2024-10-08 09:21:05.218000] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:13.817 [2024-10-08 09:21:05.218008] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:13.817 [2024-10-08 09:21:05.218016] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218020] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218024] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.817 [2024-10-08 09:21:05.218031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.817 [2024-10-08 09:21:05.218065] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.817 [2024-10-08 09:21:05.218110] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.817 [2024-10-08 09:21:05.218117] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.817 [2024-10-08 09:21:05.218121] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218125] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.817 [2024-10-08 09:21:05.218131] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:13.817 [2024-10-08 09:21:05.218140] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:13.817 [2024-10-08 09:21:05.218148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218152] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218156] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.817 [2024-10-08 09:21:05.218163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.817 [2024-10-08 09:21:05.218180] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.817 [2024-10-08 09:21:05.218226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.817 [2024-10-08 09:21:05.218232] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.817 [2024-10-08 09:21:05.218265] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218272] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.817 [2024-10-08 09:21:05.218282] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:13.817 [2024-10-08 09:21:05.218298] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218312] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.817 [2024-10-08 09:21:05.218323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.817 [2024-10-08 09:21:05.218351] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.817 [2024-10-08 09:21:05.218399] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.817 [2024-10-08 09:21:05.218406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.817 [2024-10-08 09:21:05.218411] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218415] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.817 [2024-10-08 09:21:05.218420] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:13.817 [2024-10-08 09:21:05.218426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:13.817 [2024-10-08 09:21:05.218434] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:13.817 [2024-10-08 09:21:05.218540] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:13.817 [2024-10-08 09:21:05.218546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:13.817 [2024-10-08 09:21:05.218578] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218597] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218601] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.817 [2024-10-08 09:21:05.218609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.817 [2024-10-08 09:21:05.218627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.817 [2024-10-08 09:21:05.218678] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.817 [2024-10-08 09:21:05.218685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.817 [2024-10-08 09:21:05.218689] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.817 
[2024-10-08 09:21:05.218693] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.817 [2024-10-08 09:21:05.218698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:13.817 [2024-10-08 09:21:05.218708] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218713] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.817 [2024-10-08 09:21:05.218716] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.817 [2024-10-08 09:21:05.218724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.817 [2024-10-08 09:21:05.218740] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.817 [2024-10-08 09:21:05.218803] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.817 [2024-10-08 09:21:05.218811] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.818 [2024-10-08 09:21:05.218815] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.218819] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.818 [2024-10-08 09:21:05.218824] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:13.818 [2024-10-08 09:21:05.218829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:13.818 [2024-10-08 09:21:05.218837] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:13.818 [2024-10-08 09:21:05.218865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:13.818 [2024-10-08 09:21:05.218876] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.218881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.218889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.818 [2024-10-08 09:21:05.218911] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.818 [2024-10-08 09:21:05.219008] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.818 [2024-10-08 09:21:05.219016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.818 [2024-10-08 09:21:05.219019] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219023] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242a750): datao=0, datal=4096, cccid=0 00:15:13.818 [2024-10-08 09:21:05.219029] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248e840) on tqpair(0x242a750): expected_datao=0, payload_size=4096 00:15:13.818 [2024-10-08 09:21:05.219034] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.818 
[2024-10-08 09:21:05.219042] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219047] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219056] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.818 [2024-10-08 09:21:05.219062] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.818 [2024-10-08 09:21:05.219065] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219069] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.818 [2024-10-08 09:21:05.219078] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:13.818 [2024-10-08 09:21:05.219084] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:13.818 [2024-10-08 09:21:05.219089] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:13.818 [2024-10-08 09:21:05.219094] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:13.818 [2024-10-08 09:21:05.219099] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:13.818 [2024-10-08 09:21:05.219104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:13.818 [2024-10-08 09:21:05.219125] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:13.818 [2024-10-08 09:21:05.219148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219154] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219158] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.219166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:13.818 [2024-10-08 09:21:05.219197] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.818 [2024-10-08 09:21:05.219275] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.818 [2024-10-08 09:21:05.219282] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.818 [2024-10-08 09:21:05.219285] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219289] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.818 [2024-10-08 09:21:05.219298] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219302] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219306] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.219313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.818 [2024-10-08 09:21:05.219319] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219323] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219327] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.219333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.818 [2024-10-08 09:21:05.219339] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219343] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219347] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.219353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.818 [2024-10-08 09:21:05.219359] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219363] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219367] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.219373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.818 [2024-10-08 09:21:05.219378] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:13.818 [2024-10-08 09:21:05.219391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:13.818 [2024-10-08 09:21:05.219399] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219403] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.219410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.818 [2024-10-08 09:21:05.219430] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e840, cid 0, qid 0 00:15:13.818 [2024-10-08 09:21:05.219437] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e9c0, cid 1, qid 0 00:15:13.818 [2024-10-08 09:21:05.219442] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248eb40, cid 2, qid 0 00:15:13.818 [2024-10-08 09:21:05.219447] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.818 [2024-10-08 09:21:05.219452] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ee40, cid 4, qid 0 00:15:13.818 [2024-10-08 09:21:05.219540] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.818 [2024-10-08 09:21:05.219547] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.818 [2024-10-08 09:21:05.219551] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219555] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ee40) on tqpair=0x242a750 00:15:13.818 [2024-10-08 09:21:05.219561] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:13.818 [2024-10-08 09:21:05.219566] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:13.818 [2024-10-08 09:21:05.219578] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219583] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.219591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.818 [2024-10-08 09:21:05.219608] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ee40, cid 4, qid 0 00:15:13.818 [2024-10-08 09:21:05.219685] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.818 [2024-10-08 09:21:05.219691] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.818 [2024-10-08 09:21:05.219695] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219699] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242a750): datao=0, datal=4096, cccid=4 00:15:13.818 [2024-10-08 09:21:05.219703] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248ee40) on tqpair(0x242a750): expected_datao=0, payload_size=4096 00:15:13.818 [2024-10-08 09:21:05.219708] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219714] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219718] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219727] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.818 [2024-10-08 09:21:05.219733] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.818 [2024-10-08 09:21:05.219736] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219740] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ee40) on tqpair=0x242a750 00:15:13.818 [2024-10-08 09:21:05.219753] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:13.818 [2024-10-08 09:21:05.219781] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219800] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.219808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.818 [2024-10-08 09:21:05.219817] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219821] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219825] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242a750) 00:15:13.818 [2024-10-08 09:21:05.219831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.818 [2024-10-08 09:21:05.219856] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x248ee40, cid 4, qid 0 00:15:13.818 [2024-10-08 09:21:05.219863] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248efc0, cid 5, qid 0 00:15:13.818 [2024-10-08 09:21:05.219957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.818 [2024-10-08 09:21:05.219964] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.818 [2024-10-08 09:21:05.219968] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219971] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242a750): datao=0, datal=1024, cccid=4 00:15:13.818 [2024-10-08 09:21:05.219976] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248ee40) on tqpair(0x242a750): expected_datao=0, payload_size=1024 00:15:13.818 [2024-10-08 09:21:05.219980] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219987] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.818 [2024-10-08 09:21:05.219991] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.219996] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.819 [2024-10-08 09:21:05.220002] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.819 [2024-10-08 09:21:05.220005] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220009] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248efc0) on tqpair=0x242a750 00:15:13.819 [2024-10-08 09:21:05.220026] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.819 [2024-10-08 09:21:05.220034] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.819 [2024-10-08 09:21:05.220037] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220041] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ee40) on tqpair=0x242a750 00:15:13.819 [2024-10-08 09:21:05.220053] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220057] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242a750) 00:15:13.819 [2024-10-08 09:21:05.220065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.819 [2024-10-08 09:21:05.220087] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ee40, cid 4, qid 0 00:15:13.819 [2024-10-08 09:21:05.220166] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.819 [2024-10-08 09:21:05.220173] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.819 [2024-10-08 09:21:05.220177] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220181] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242a750): datao=0, datal=3072, cccid=4 00:15:13.819 [2024-10-08 09:21:05.220185] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248ee40) on tqpair(0x242a750): expected_datao=0, payload_size=3072 00:15:13.819 [2024-10-08 09:21:05.220190] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220197] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220201] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220209] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.819 [2024-10-08 09:21:05.220216] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.819 [2024-10-08 09:21:05.220219] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220223] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ee40) on tqpair=0x242a750 00:15:13.819 [2024-10-08 09:21:05.220233] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220238] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242a750) 00:15:13.819 [2024-10-08 09:21:05.220245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.819 [2024-10-08 09:21:05.220270] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ee40, cid 4, qid 0 00:15:13.819 [2024-10-08 09:21:05.220336] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.819 [2024-10-08 09:21:05.220343] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.819 ===================================================== 00:15:13.819 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:13.819 ===================================================== 00:15:13.819 Controller Capabilities/Features 00:15:13.819 ================================ 00:15:13.819 Vendor ID: 0000 00:15:13.819 Subsystem Vendor ID: 0000 00:15:13.819 Serial Number: .................... 00:15:13.819 Model Number: ........................................ 
00:15:13.819 Firmware Version: 25.01 00:15:13.819 Recommended Arb Burst: 0 00:15:13.819 IEEE OUI Identifier: 00 00 00 00:15:13.819 Multi-path I/O 00:15:13.819 May have multiple subsystem ports: No 00:15:13.819 May have multiple controllers: No 00:15:13.819 Associated with SR-IOV VF: No 00:15:13.819 Max Data Transfer Size: 131072 00:15:13.819 Max Number of Namespaces: 0 00:15:13.819 Max Number of I/O Queues: 1024 00:15:13.819 NVMe Specification Version (VS): 1.3 00:15:13.819 NVMe Specification Version (Identify): 1.3 00:15:13.819 Maximum Queue Entries: 128 00:15:13.819 Contiguous Queues Required: Yes 00:15:13.819 Arbitration Mechanisms Supported 00:15:13.819 Weighted Round Robin: Not Supported 00:15:13.819 Vendor Specific: Not Supported 00:15:13.819 Reset Timeout: 15000 ms 00:15:13.819 Doorbell Stride: 4 bytes 00:15:13.819 NVM Subsystem Reset: Not Supported 00:15:13.819 Command Sets Supported 00:15:13.819 NVM Command Set: Supported 00:15:13.819 Boot Partition: Not Supported 00:15:13.819 Memory Page Size Minimum: 4096 bytes 00:15:13.819 Memory Page Size Maximum: 4096 bytes 00:15:13.819 Persistent Memory Region: Not Supported 00:15:13.819 Optional Asynchronous Events Supported 00:15:13.819 Namespace Attribute Notices: Not Supported 00:15:13.819 Firmware Activation Notices: Not Supported 00:15:13.819 ANA Change Notices: Not Supported 00:15:13.819 PLE Aggregate Log Change Notices: Not Supported 00:15:13.819 LBA Status Info Alert Notices: Not Supported 00:15:13.819 EGE Aggregate Log Change Notices: Not Supported 00:15:13.819 Normal NVM Subsystem Shutdown event: Not Supported 00:15:13.819 Zone Descriptor Change Notices: Not Supported 00:15:13.819 Discovery Log Change Notices: Supported 00:15:13.819 Controller Attributes 00:15:13.819 128-bit Host Identifier: Not Supported 00:15:13.819 Non-Operational Permissive Mode: Not Supported 00:15:13.819 NVM Sets: Not Supported 00:15:13.819 Read Recovery Levels: Not Supported 00:15:13.819 Endurance Groups: Not Supported 00:15:13.819 Predictable Latency Mode: Not Supported 00:15:13.819 Traffic Based Keep ALive: Not Supported 00:15:13.819 Namespace Granularity: Not Supported 00:15:13.819 SQ Associations: Not Supported 00:15:13.819 UUID List: Not Supported 00:15:13.819 Multi-Domain Subsystem: Not Supported 00:15:13.819 Fixed Capacity Management: Not Supported 00:15:13.819 Variable Capacity Management: Not Supported 00:15:13.819 Delete Endurance Group: Not Supported 00:15:13.819 Delete NVM Set: Not Supported 00:15:13.819 Extended LBA Formats Supported: Not Supported 00:15:13.819 Flexible Data Placement Supported: Not Supported 00:15:13.819 00:15:13.819 Controller Memory Buffer Support 00:15:13.819 ================================ 00:15:13.819 Supported: No 00:15:13.819 00:15:13.819 Persistent Memory Region Support 00:15:13.819 ================================ 00:15:13.819 Supported: No 00:15:13.819 00:15:13.819 Admin Command Set Attributes 00:15:13.819 ============================ 00:15:13.819 Security Send/Receive: Not Supported 00:15:13.819 Format NVM: Not Supported 00:15:13.819 Firmware Activate/Download: Not Supported 00:15:13.819 Namespace Management: Not Supported 00:15:13.819 Device Self-Test: Not Supported 00:15:13.819 Directives: Not Supported 00:15:13.819 NVMe-MI: Not Supported 00:15:13.819 Virtualization Management: Not Supported 00:15:13.819 Doorbell Buffer Config: Not Supported 00:15:13.819 Get LBA Status Capability: Not Supported 00:15:13.819 Command & Feature Lockdown Capability: Not Supported 00:15:13.819 Abort Command Limit: 1 00:15:13.819 Async 
Event Request Limit: 4 00:15:13.819 Number of Firmware Slots: N/A 00:15:13.819 Firmware Slot 1 Read-Only: N/A 00:15:13.819 [2024-10-08 09:21:05.220347] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220351] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242a750): datao=0, datal=8, cccid=4 00:15:13.819 [2024-10-08 09:21:05.220355] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248ee40) on tqpair(0x242a750): expected_datao=0, payload_size=8 00:15:13.819 [2024-10-08 09:21:05.220360] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220367] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220371] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220385] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.819 [2024-10-08 09:21:05.220392] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.819 [2024-10-08 09:21:05.220396] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.819 [2024-10-08 09:21:05.220400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ee40) on tqpair=0x242a750 00:15:13.819 Firmware Activation Without Reset: N/A 00:15:13.819 Multiple Update Detection Support: N/A 00:15:13.819 Firmware Update Granularity: No Information Provided 00:15:13.819 Per-Namespace SMART Log: No 00:15:13.819 Asymmetric Namespace Access Log Page: Not Supported 00:15:13.819 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:13.819 Command Effects Log Page: Not Supported 00:15:13.819 Get Log Page Extended Data: Supported 00:15:13.819 Telemetry Log Pages: Not Supported 00:15:13.819 Persistent Event Log Pages: Not Supported 00:15:13.819 Supported Log Pages Log Page: May Support 00:15:13.819 Commands Supported & Effects Log Page: Not Supported 00:15:13.819 Feature Identifiers & Effects Log Page:May Support 00:15:13.819 NVMe-MI Commands & Effects Log Page: May Support 00:15:13.819 Data Area 4 for Telemetry Log: Not Supported 00:15:13.819 Error Log Page Entries Supported: 128 00:15:13.819 Keep Alive: Not Supported 00:15:13.819 00:15:13.819 NVM Command Set Attributes 00:15:13.819 ========================== 00:15:13.819 Submission Queue Entry Size 00:15:13.819 Max: 1 00:15:13.819 Min: 1 00:15:13.819 Completion Queue Entry Size 00:15:13.819 Max: 1 00:15:13.819 Min: 1 00:15:13.819 Number of Namespaces: 0 00:15:13.819 Compare Command: Not Supported 00:15:13.819 Write Uncorrectable Command: Not Supported 00:15:13.819 Dataset Management Command: Not Supported 00:15:13.819 Write Zeroes Command: Not Supported 00:15:13.819 Set Features Save Field: Not Supported 00:15:13.819 Reservations: Not Supported 00:15:13.819 Timestamp: Not Supported 00:15:13.819 Copy: Not Supported 00:15:13.819 Volatile Write Cache: Not Present 00:15:13.819 Atomic Write Unit (Normal): 1 00:15:13.819 Atomic Write Unit (PFail): 1 00:15:13.819 Atomic Compare & Write Unit: 1 00:15:13.819 Fused Compare & Write: Supported 00:15:13.820 Scatter-Gather List 00:15:13.820 SGL Command Set: Supported 00:15:13.820 SGL Keyed: Supported 00:15:13.820 SGL Bit Bucket Descriptor: Not Supported 00:15:13.820 SGL Metadata Pointer: Not Supported 00:15:13.820 Oversized SGL: Not Supported 00:15:13.820 SGL Metadata Address: Not Supported 00:15:13.820 SGL Offset: Supported 00:15:13.820 Transport SGL Data Block: Not Supported 00:15:13.820 Replay
Protected Memory Block: Not Supported 00:15:13.820 00:15:13.820 Firmware Slot Information 00:15:13.820 ========================= 00:15:13.820 Active slot: 0 00:15:13.820 00:15:13.820 00:15:13.820 Error Log 00:15:13.820 ========= 00:15:13.820 00:15:13.820 Active Namespaces 00:15:13.820 ================= 00:15:13.820 Discovery Log Page 00:15:13.820 ================== 00:15:13.820 Generation Counter: 2 00:15:13.820 Number of Records: 2 00:15:13.820 Record Format: 0 00:15:13.820 00:15:13.820 Discovery Log Entry 0 00:15:13.820 ---------------------- 00:15:13.820 Transport Type: 3 (TCP) 00:15:13.820 Address Family: 1 (IPv4) 00:15:13.820 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:13.820 Entry Flags: 00:15:13.820 Duplicate Returned Information: 1 00:15:13.820 Explicit Persistent Connection Support for Discovery: 1 00:15:13.820 Transport Requirements: 00:15:13.820 Secure Channel: Not Required 00:15:13.820 Port ID: 0 (0x0000) 00:15:13.820 Controller ID: 65535 (0xffff) 00:15:13.820 Admin Max SQ Size: 128 00:15:13.820 Transport Service Identifier: 4420 00:15:13.820 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:13.820 Transport Address: 10.0.0.3 00:15:13.820 Discovery Log Entry 1 00:15:13.820 ---------------------- 00:15:13.820 Transport Type: 3 (TCP) 00:15:13.820 Address Family: 1 (IPv4) 00:15:13.820 Subsystem Type: 2 (NVM Subsystem) 00:15:13.820 Entry Flags: 00:15:13.820 Duplicate Returned Information: 0 00:15:13.820 Explicit Persistent Connection Support for Discovery: 0 00:15:13.820 Transport Requirements: 00:15:13.820 Secure Channel: Not Required 00:15:13.820 Port ID: 0 (0x0000) 00:15:13.820 Controller ID: 65535 (0xffff) 00:15:13.820 Admin Max SQ Size: 128 00:15:13.820 Transport Service Identifier: 4420 00:15:13.820 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:13.820 Transport Address: 10.0.0.3 [2024-10-08 09:21:05.220542] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:13.820 [2024-10-08 09:21:05.220560] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e840) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.220568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.820 [2024-10-08 09:21:05.220574] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e9c0) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.220579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.820 [2024-10-08 09:21:05.220584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248eb40) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.220589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.820 [2024-10-08 09:21:05.220594] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.220598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.820 [2024-10-08 09:21:05.220608] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.220612] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.220616] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.820 [2024-10-08 09:21:05.220624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.820 [2024-10-08 09:21:05.220649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.820 [2024-10-08 09:21:05.220703] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.820 [2024-10-08 09:21:05.220710] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.820 [2024-10-08 09:21:05.220714] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.220718] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.220725] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.220730] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.220751] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.820 [2024-10-08 09:21:05.220760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.820 [2024-10-08 09:21:05.220785] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.820 [2024-10-08 09:21:05.220852] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.820 [2024-10-08 09:21:05.220859] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.820 [2024-10-08 09:21:05.220863] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.220866] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.220872] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:13.820 [2024-10-08 09:21:05.220877] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:13.820 [2024-10-08 09:21:05.220886] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.220891] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.220895] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.820 [2024-10-08 09:21:05.220902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.820 [2024-10-08 09:21:05.220935] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.820 [2024-10-08 09:21:05.220980] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.820 [2024-10-08 09:21:05.220987] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.820 [2024-10-08 09:21:05.220991] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.220995] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.221006] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221011] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.820 [2024-10-08 09:21:05.221022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.820 [2024-10-08 09:21:05.221038] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.820 [2024-10-08 09:21:05.221080] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.820 [2024-10-08 09:21:05.221087] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.820 [2024-10-08 09:21:05.221091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221095] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.221121] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221127] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221131] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.820 [2024-10-08 09:21:05.221138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.820 [2024-10-08 09:21:05.221155] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.820 [2024-10-08 09:21:05.221202] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.820 [2024-10-08 09:21:05.221209] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.820 [2024-10-08 09:21:05.221213] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221217] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.221227] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221232] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221236] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.820 [2024-10-08 09:21:05.221244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.820 [2024-10-08 09:21:05.221260] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.820 [2024-10-08 09:21:05.221321] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.820 [2024-10-08 09:21:05.221328] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.820 [2024-10-08 09:21:05.221332] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.221346] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221355] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 
00:15:13.820 [2024-10-08 09:21:05.221362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.820 [2024-10-08 09:21:05.221378] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.820 [2024-10-08 09:21:05.221423] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.820 [2024-10-08 09:21:05.221430] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.820 [2024-10-08 09:21:05.221434] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221438] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.820 [2024-10-08 09:21:05.221448] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221468] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.820 [2024-10-08 09:21:05.221471] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.820 [2024-10-08 09:21:05.221479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.821 [2024-10-08 09:21:05.221494] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.821 [2024-10-08 09:21:05.221535] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.821 [2024-10-08 09:21:05.221542] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.821 [2024-10-08 09:21:05.221546] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.221550] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.821 [2024-10-08 09:21:05.221559] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.221564] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.221568] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.821 [2024-10-08 09:21:05.221575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.821 [2024-10-08 09:21:05.221590] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.821 [2024-10-08 09:21:05.221635] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.821 [2024-10-08 09:21:05.221641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.821 [2024-10-08 09:21:05.221645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.221649] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.821 [2024-10-08 09:21:05.221658] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.221663] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.221667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.821 [2024-10-08 09:21:05.221674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.821 [2024-10-08 
09:21:05.221689] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.821 [2024-10-08 09:21:05.221733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.821 [2024-10-08 09:21:05.221740] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.821 [2024-10-08 09:21:05.221743] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.221747] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.821 [2024-10-08 09:21:05.221757] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.221762] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.221765] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.821 [2024-10-08 09:21:05.221773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.821 [2024-10-08 09:21:05.221789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.821 [2024-10-08 09:21:05.225815] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.821 [2024-10-08 09:21:05.225826] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.821 [2024-10-08 09:21:05.225830] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.225835] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.821 [2024-10-08 09:21:05.225848] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.225853] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.225858] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242a750) 00:15:13.821 [2024-10-08 09:21:05.225866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.821 [2024-10-08 09:21:05.225891] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ecc0, cid 3, qid 0 00:15:13.821 [2024-10-08 09:21:05.225941] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.821 [2024-10-08 09:21:05.225948] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.821 [2024-10-08 09:21:05.225951] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.225955] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ecc0) on tqpair=0x242a750 00:15:13.821 [2024-10-08 09:21:05.225963] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:15:13.821 00:15:13.821 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:13.821 [2024-10-08 09:21:05.270284] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:15:13.821 [2024-10-08 09:21:05.270346] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74389 ] 00:15:13.821 [2024-10-08 09:21:05.407745] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:13.821 [2024-10-08 09:21:05.407834] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:13.821 [2024-10-08 09:21:05.407841] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:13.821 [2024-10-08 09:21:05.407851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:13.821 [2024-10-08 09:21:05.407860] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:13.821 [2024-10-08 09:21:05.408124] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:13.821 [2024-10-08 09:21:05.408207] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x76a750 0 00:15:13.821 [2024-10-08 09:21:05.419814] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:13.821 [2024-10-08 09:21:05.419842] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:13.821 [2024-10-08 09:21:05.419849] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:13.821 [2024-10-08 09:21:05.419852] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:13.821 [2024-10-08 09:21:05.419894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.419902] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.419907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.821 [2024-10-08 09:21:05.419919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:13.821 [2024-10-08 09:21:05.419953] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.821 [2024-10-08 09:21:05.427751] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.821 [2024-10-08 09:21:05.427775] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.821 [2024-10-08 09:21:05.427780] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.427785] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.821 [2024-10-08 09:21:05.427801] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:13.821 [2024-10-08 09:21:05.427810] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:13.821 [2024-10-08 09:21:05.427817] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:13.821 [2024-10-08 09:21:05.427833] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.427839] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.427843] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.821 [2024-10-08 09:21:05.427853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.821 [2024-10-08 09:21:05.427882] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.821 [2024-10-08 09:21:05.427940] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.821 [2024-10-08 09:21:05.427948] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.821 [2024-10-08 09:21:05.427952] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.427957] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.821 [2024-10-08 09:21:05.427963] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:13.821 [2024-10-08 09:21:05.427971] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:13.821 [2024-10-08 09:21:05.427980] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.427984] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.427988] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.821 [2024-10-08 09:21:05.427997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.821 [2024-10-08 09:21:05.428017] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.821 [2024-10-08 09:21:05.428356] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.821 [2024-10-08 09:21:05.428373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.821 [2024-10-08 09:21:05.428378] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.821 [2024-10-08 09:21:05.428383] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.821 [2024-10-08 09:21:05.428389] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:13.821 [2024-10-08 09:21:05.428399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:13.822 [2024-10-08 09:21:05.428407] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.428412] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.428416] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.428424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.822 [2024-10-08 09:21:05.428445] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.822 [2024-10-08 09:21:05.428490] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.822 [2024-10-08 09:21:05.428498] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.822 [2024-10-08 09:21:05.428501] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.428506] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.822 [2024-10-08 09:21:05.428512] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:13.822 [2024-10-08 09:21:05.428523] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.428528] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.428531] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.428539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.822 [2024-10-08 09:21:05.428557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.822 [2024-10-08 09:21:05.428748] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.822 [2024-10-08 09:21:05.428764] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.822 [2024-10-08 09:21:05.428769] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.428773] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.822 [2024-10-08 09:21:05.428779] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:13.822 [2024-10-08 09:21:05.428784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:13.822 [2024-10-08 09:21:05.428793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:13.822 [2024-10-08 09:21:05.428900] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:13.822 [2024-10-08 09:21:05.428904] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:13.822 [2024-10-08 09:21:05.428915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.428920] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.428924] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.428932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.822 [2024-10-08 09:21:05.428955] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.822 [2024-10-08 09:21:05.429381] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.822 [2024-10-08 09:21:05.429396] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.822 [2024-10-08 09:21:05.429402] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.429406] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.822 [2024-10-08 09:21:05.429412] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:13.822 [2024-10-08 09:21:05.429423] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.429429] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.429433] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.429440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.822 [2024-10-08 09:21:05.429461] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.822 [2024-10-08 09:21:05.429515] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.822 [2024-10-08 09:21:05.429523] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.822 [2024-10-08 09:21:05.429526] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.429531] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.822 [2024-10-08 09:21:05.429536] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:13.822 [2024-10-08 09:21:05.429541] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:13.822 [2024-10-08 09:21:05.429550] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:13.822 [2024-10-08 09:21:05.429565] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:13.822 [2024-10-08 09:21:05.429577] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.429582] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.429590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.822 [2024-10-08 09:21:05.429610] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.822 [2024-10-08 09:21:05.429842] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.822 [2024-10-08 09:21:05.429852] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.822 [2024-10-08 09:21:05.429856] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.429861] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x76a750): datao=0, datal=4096, cccid=0 00:15:13.822 [2024-10-08 09:21:05.429866] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7ce840) on tqpair(0x76a750): expected_datao=0, payload_size=4096 00:15:13.822 [2024-10-08 09:21:05.429871] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.429878] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.429884] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.822 [2024-10-08 
09:21:05.429997] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.822 [2024-10-08 09:21:05.430004] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.822 [2024-10-08 09:21:05.430008] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.822 [2024-10-08 09:21:05.430031] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:13.822 [2024-10-08 09:21:05.430037] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:13.822 [2024-10-08 09:21:05.430042] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:13.822 [2024-10-08 09:21:05.430047] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:13.822 [2024-10-08 09:21:05.430052] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:13.822 [2024-10-08 09:21:05.430057] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:13.822 [2024-10-08 09:21:05.430067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:13.822 [2024-10-08 09:21:05.430080] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.430098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:13.822 [2024-10-08 09:21:05.430122] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.822 [2024-10-08 09:21:05.430411] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.822 [2024-10-08 09:21:05.430431] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.822 [2024-10-08 09:21:05.430436] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430440] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.822 [2024-10-08 09:21:05.430449] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430454] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430458] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.430466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.822 [2024-10-08 09:21:05.430473] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430477] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430481] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x76a750) 00:15:13.822 
[2024-10-08 09:21:05.430487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.822 [2024-10-08 09:21:05.430494] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430498] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430502] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.430508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.822 [2024-10-08 09:21:05.430514] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430518] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430522] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.430528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.822 [2024-10-08 09:21:05.430534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:13.822 [2024-10-08 09:21:05.430549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:13.822 [2024-10-08 09:21:05.430557] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.822 [2024-10-08 09:21:05.430561] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x76a750) 00:15:13.822 [2024-10-08 09:21:05.430569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.822 [2024-10-08 09:21:05.430594] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce840, cid 0, qid 0 00:15:13.822 [2024-10-08 09:21:05.430602] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ce9c0, cid 1, qid 0 00:15:13.822 [2024-10-08 09:21:05.430607] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ceb40, cid 2, qid 0 00:15:13.823 [2024-10-08 09:21:05.430613] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.823 [2024-10-08 09:21:05.430618] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cee40, cid 4, qid 0 00:15:13.823 [2024-10-08 09:21:05.431158] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.823 [2024-10-08 09:21:05.431175] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.823 [2024-10-08 09:21:05.431181] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.431185] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cee40) on tqpair=0x76a750 00:15:13.823 [2024-10-08 09:21:05.431191] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:13.823 [2024-10-08 09:21:05.431197] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.431212] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.431220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.431227] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.431232] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.431236] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x76a750) 00:15:13.823 [2024-10-08 09:21:05.431245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:13.823 [2024-10-08 09:21:05.431268] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cee40, cid 4, qid 0 00:15:13.823 [2024-10-08 09:21:05.431318] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.823 [2024-10-08 09:21:05.431325] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.823 [2024-10-08 09:21:05.431329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.431333] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cee40) on tqpair=0x76a750 00:15:13.823 [2024-10-08 09:21:05.431401] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.431414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.431423] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.431427] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x76a750) 00:15:13.823 [2024-10-08 09:21:05.431435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.823 [2024-10-08 09:21:05.431457] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cee40, cid 4, qid 0 00:15:13.823 [2024-10-08 09:21:05.434774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.823 [2024-10-08 09:21:05.434792] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.823 [2024-10-08 09:21:05.434797] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.434802] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x76a750): datao=0, datal=4096, cccid=4 00:15:13.823 [2024-10-08 09:21:05.434807] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7cee40) on tqpair(0x76a750): expected_datao=0, payload_size=4096 00:15:13.823 [2024-10-08 09:21:05.434812] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.434820] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.434824] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.434831] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.823 [2024-10-08 09:21:05.434837] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:15:13.823 [2024-10-08 09:21:05.434841] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.434845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cee40) on tqpair=0x76a750 00:15:13.823 [2024-10-08 09:21:05.434867] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:13.823 [2024-10-08 09:21:05.434880] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.434893] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.434903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.434908] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x76a750) 00:15:13.823 [2024-10-08 09:21:05.434917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.823 [2024-10-08 09:21:05.434944] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cee40, cid 4, qid 0 00:15:13.823 [2024-10-08 09:21:05.435249] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.823 [2024-10-08 09:21:05.435265] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.823 [2024-10-08 09:21:05.435271] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435275] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x76a750): datao=0, datal=4096, cccid=4 00:15:13.823 [2024-10-08 09:21:05.435280] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7cee40) on tqpair(0x76a750): expected_datao=0, payload_size=4096 00:15:13.823 [2024-10-08 09:21:05.435285] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435292] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435296] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435373] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.823 [2024-10-08 09:21:05.435380] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.823 [2024-10-08 09:21:05.435383] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435388] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cee40) on tqpair=0x76a750 00:15:13.823 [2024-10-08 09:21:05.435401] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.435412] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.435421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x76a750) 00:15:13.823 [2024-10-08 09:21:05.435433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.823 [2024-10-08 09:21:05.435456] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cee40, cid 4, qid 0 00:15:13.823 [2024-10-08 09:21:05.435820] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.823 [2024-10-08 09:21:05.435836] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.823 [2024-10-08 09:21:05.435841] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435845] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x76a750): datao=0, datal=4096, cccid=4 00:15:13.823 [2024-10-08 09:21:05.435850] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7cee40) on tqpair(0x76a750): expected_datao=0, payload_size=4096 00:15:13.823 [2024-10-08 09:21:05.435855] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435863] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435868] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435877] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.823 [2024-10-08 09:21:05.435884] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.823 [2024-10-08 09:21:05.435887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cee40) on tqpair=0x76a750 00:15:13.823 [2024-10-08 09:21:05.435907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.435917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.435927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.435935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.435941] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.435946] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.435952] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:13.823 [2024-10-08 09:21:05.435957] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:13.823 [2024-10-08 09:21:05.435962] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:13.823 [2024-10-08 09:21:05.435978] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.435983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x76a750) 00:15:13.823 [2024-10-08 09:21:05.435991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.823 [2024-10-08 09:21:05.435999] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.436003] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.436007] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x76a750) 00:15:13.823 [2024-10-08 09:21:05.436015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.823 [2024-10-08 09:21:05.436047] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cee40, cid 4, qid 0 00:15:13.823 [2024-10-08 09:21:05.436056] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cefc0, cid 5, qid 0 00:15:13.823 [2024-10-08 09:21:05.436378] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.823 [2024-10-08 09:21:05.436394] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.823 [2024-10-08 09:21:05.436399] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.436403] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cee40) on tqpair=0x76a750 00:15:13.823 [2024-10-08 09:21:05.436411] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.823 [2024-10-08 09:21:05.436417] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.823 [2024-10-08 09:21:05.436421] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.436426] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cefc0) on tqpair=0x76a750 00:15:13.823 [2024-10-08 09:21:05.436438] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.823 [2024-10-08 09:21:05.436443] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x76a750) 00:15:13.823 [2024-10-08 09:21:05.436450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.823 [2024-10-08 09:21:05.436472] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cefc0, cid 5, qid 0 00:15:13.823 [2024-10-08 09:21:05.436524] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.824 [2024-10-08 09:21:05.436531] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.824 [2024-10-08 09:21:05.436535] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.436539] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cefc0) on tqpair=0x76a750 00:15:13.824 [2024-10-08 09:21:05.436550] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.436555] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x76a750) 00:15:13.824 [2024-10-08 09:21:05.436562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.824 [2024-10-08 09:21:05.436581] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cefc0, cid 5, qid 0 00:15:13.824 [2024-10-08 09:21:05.437066] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.824 [2024-10-08 09:21:05.437082] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:13.824 [2024-10-08 09:21:05.437087] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437092] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cefc0) on tqpair=0x76a750 00:15:13.824 [2024-10-08 09:21:05.437104] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437108] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x76a750) 00:15:13.824 [2024-10-08 09:21:05.437116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.824 [2024-10-08 09:21:05.437138] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cefc0, cid 5, qid 0 00:15:13.824 [2024-10-08 09:21:05.437204] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.824 [2024-10-08 09:21:05.437227] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.824 [2024-10-08 09:21:05.437230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cefc0) on tqpair=0x76a750 00:15:13.824 [2024-10-08 09:21:05.437256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437262] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x76a750) 00:15:13.824 [2024-10-08 09:21:05.437270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.824 [2024-10-08 09:21:05.437278] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437282] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x76a750) 00:15:13.824 [2024-10-08 09:21:05.437289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.824 [2024-10-08 09:21:05.437297] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437302] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x76a750) 00:15:13.824 [2024-10-08 09:21:05.437308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.824 [2024-10-08 09:21:05.437316] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437321] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x76a750) 00:15:13.824 [2024-10-08 09:21:05.437327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.824 [2024-10-08 09:21:05.437349] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cefc0, cid 5, qid 0 00:15:13.824 [2024-10-08 09:21:05.437357] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cee40, cid 4, qid 0 00:15:13.824 [2024-10-08 09:21:05.437362] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cf140, cid 6, qid 0 00:15:13.824 [2024-10-08 
09:21:05.437367] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cf2c0, cid 7, qid 0 00:15:13.824 [2024-10-08 09:21:05.437831] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.824 [2024-10-08 09:21:05.437847] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.824 [2024-10-08 09:21:05.437852] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437856] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x76a750): datao=0, datal=8192, cccid=5 00:15:13.824 [2024-10-08 09:21:05.437861] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7cefc0) on tqpair(0x76a750): expected_datao=0, payload_size=8192 00:15:13.824 [2024-10-08 09:21:05.437866] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437883] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437889] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437895] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.824 [2024-10-08 09:21:05.437901] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.824 [2024-10-08 09:21:05.437905] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437909] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x76a750): datao=0, datal=512, cccid=4 00:15:13.824 [2024-10-08 09:21:05.437914] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7cee40) on tqpair(0x76a750): expected_datao=0, payload_size=512 00:15:13.824 [2024-10-08 09:21:05.437918] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437925] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437929] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437935] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.824 [2024-10-08 09:21:05.437940] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.824 [2024-10-08 09:21:05.437944] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437948] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x76a750): datao=0, datal=512, cccid=6 00:15:13.824 [2024-10-08 09:21:05.437952] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7cf140) on tqpair(0x76a750): expected_datao=0, payload_size=512 00:15:13.824 [2024-10-08 09:21:05.437957] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437963] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437967] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437973] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:13.824 [2024-10-08 09:21:05.437979] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:13.824 [2024-10-08 09:21:05.437982] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.437986] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x76a750): datao=0, datal=4096, cccid=7 00:15:13.824 [2024-10-08 09:21:05.437990] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7cf2c0) on tqpair(0x76a750): expected_datao=0, payload_size=4096 00:15:13.824 [2024-10-08 09:21:05.437995] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.438002] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.438005] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.438011] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.824 [2024-10-08 09:21:05.438017] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.824 [2024-10-08 09:21:05.438021] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.438025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cefc0) on tqpair=0x76a750 00:15:13.824 [2024-10-08 09:21:05.438042] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.824 [2024-10-08 09:21:05.438049] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.824 [2024-10-08 09:21:05.438053] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.438057] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cee40) on tqpair=0x76a750 00:15:13.824 [2024-10-08 09:21:05.438071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.824 [2024-10-08 09:21:05.438078] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.824 [2024-10-08 09:21:05.438081] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.824 [2024-10-08 09:21:05.438086] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cf140) on tqpair=0x76a750 00:15:13.824 ===================================================== 00:15:13.824 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.824 ===================================================== 00:15:13.824 Controller Capabilities/Features 00:15:13.824 ================================ 00:15:13.824 Vendor ID: 8086 00:15:13.824 Subsystem Vendor ID: 8086 00:15:13.824 Serial Number: SPDK00000000000001 00:15:13.824 Model Number: SPDK bdev Controller 00:15:13.824 Firmware Version: 25.01 00:15:13.824 Recommended Arb Burst: 6 00:15:13.824 IEEE OUI Identifier: e4 d2 5c 00:15:13.824 Multi-path I/O 00:15:13.824 May have multiple subsystem ports: Yes 00:15:13.824 May have multiple controllers: Yes 00:15:13.824 Associated with SR-IOV VF: No 00:15:13.824 Max Data Transfer Size: 131072 00:15:13.824 Max Number of Namespaces: 32 00:15:13.824 Max Number of I/O Queues: 127 00:15:13.824 NVMe Specification Version (VS): 1.3 00:15:13.824 NVMe Specification Version (Identify): 1.3 00:15:13.824 Maximum Queue Entries: 128 00:15:13.824 Contiguous Queues Required: Yes 00:15:13.824 Arbitration Mechanisms Supported 00:15:13.824 Weighted Round Robin: Not Supported 00:15:13.824 Vendor Specific: Not Supported 00:15:13.824 Reset Timeout: 15000 ms 00:15:13.824 Doorbell Stride: 4 bytes 00:15:13.824 NVM Subsystem Reset: Not Supported 00:15:13.824 Command Sets Supported 00:15:13.824 NVM Command Set: Supported 00:15:13.824 Boot Partition: Not Supported 00:15:13.824 Memory Page Size Minimum: 4096 bytes 00:15:13.824 Memory Page Size Maximum: 4096 bytes 00:15:13.824 Persistent Memory Region: Not Supported 00:15:13.824 Optional Asynchronous Events Supported 00:15:13.824 Namespace Attribute Notices: Supported 00:15:13.824 
Firmware Activation Notices: Not Supported 00:15:13.824 ANA Change Notices: Not Supported 00:15:13.824 PLE Aggregate Log Change Notices: Not Supported 00:15:13.824 LBA Status Info Alert Notices: Not Supported 00:15:13.824 EGE Aggregate Log Change Notices: Not Supported 00:15:13.824 Normal NVM Subsystem Shutdown event: Not Supported 00:15:13.824 Zone Descriptor Change Notices: Not Supported 00:15:13.824 Discovery Log Change Notices: Not Supported 00:15:13.824 Controller Attributes 00:15:13.824 128-bit Host Identifier: Supported 00:15:13.824 Non-Operational Permissive Mode: Not Supported 00:15:13.824 NVM Sets: Not Supported 00:15:13.824 Read Recovery Levels: Not Supported 00:15:13.824 Endurance Groups: Not Supported 00:15:13.824 Predictable Latency Mode: Not Supported 00:15:13.824 Traffic Based Keep ALive: Not Supported 00:15:13.824 Namespace Granularity: Not Supported 00:15:13.824 SQ Associations: Not Supported 00:15:13.824 UUID List: Not Supported 00:15:13.824 Multi-Domain Subsystem: Not Supported 00:15:13.824 Fixed Capacity Management: Not Supported 00:15:13.824 Variable Capacity Management: Not Supported 00:15:13.825 Delete Endurance Group: Not Supported 00:15:13.825 Delete NVM Set: Not Supported 00:15:13.825 Extended LBA Formats Supported: Not Supported 00:15:13.825 Flexible Data Placement Supported: Not Supported 00:15:13.825 00:15:13.825 Controller Memory Buffer Support 00:15:13.825 ================================ 00:15:13.825 Supported: No 00:15:13.825 00:15:13.825 Persistent Memory Region Support 00:15:13.825 ================================ 00:15:13.825 Supported: No 00:15:13.825 00:15:13.825 Admin Command Set Attributes 00:15:13.825 ============================ 00:15:13.825 Security Send/Receive: Not Supported 00:15:13.825 Format NVM: Not Supported 00:15:13.825 Firmware Activate/Download: Not Supported 00:15:13.825 Namespace Management: Not Supported 00:15:13.825 Device Self-Test: Not Supported 00:15:13.825 Directives: Not Supported 00:15:13.825 NVMe-MI: Not Supported 00:15:13.825 Virtualization Management: Not Supported 00:15:13.825 Doorbell Buffer Config: Not Supported 00:15:13.825 Get LBA Status Capability: Not Supported 00:15:13.825 Command & Feature Lockdown Capability: Not Supported 00:15:13.825 Abort Command Limit: 4 00:15:13.825 Async Event Request Limit: 4 00:15:13.825 Number of Firmware Slots: N/A 00:15:13.825 Firmware Slot 1 Read-Only: N/A 00:15:13.825 Firmware Activation Without Reset: N/A 00:15:13.825 Multiple Update Detection Support: N/A 00:15:13.825 Firmware Update Granularity: No Information Provided 00:15:13.825 Per-Namespace SMART Log: No 00:15:13.825 Asymmetric Namespace Access Log Page: Not Supported 00:15:13.825 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:13.825 Command Effects Log Page: Supported 00:15:13.825 Get Log Page Extended Data: Supported 00:15:13.825 Telemetry Log Pages: Not Supported 00:15:13.825 Persistent Event Log Pages: Not Supported 00:15:13.825 Supported Log Pages Log Page: May Support 00:15:13.825 Commands Supported & Effects Log Page: Not Supported 00:15:13.825 Feature Identifiers & Effects Log Page:May Support 00:15:13.825 NVMe-MI Commands & Effects Log Page: May Support 00:15:13.825 Data Area 4 for Telemetry Log: Not Supported 00:15:13.825 Error Log Page Entries Supported: 128 00:15:13.825 Keep Alive: Supported 00:15:13.825 Keep Alive Granularity: 10000 ms 00:15:13.825 00:15:13.825 NVM Command Set Attributes 00:15:13.825 ========================== 00:15:13.825 Submission Queue Entry Size 00:15:13.825 Max: 64 00:15:13.825 Min: 64 
00:15:13.825 Completion Queue Entry Size 00:15:13.825 Max: 16 00:15:13.825 Min: 16 00:15:13.825 Number of Namespaces: 32 00:15:13.825 Compare Command: Supported 00:15:13.825 Write Uncorrectable Command: Not Supported 00:15:13.825 Dataset Management Command: Supported 00:15:13.825 Write Zeroes Command: Supported 00:15:13.825 Set Features Save Field: Not Supported 00:15:13.825 Reservations: Supported 00:15:13.825 Timestamp: Not Supported 00:15:13.825 Copy: Supported 00:15:13.825 Volatile Write Cache: Present 00:15:13.825 Atomic Write Unit (Normal): 1 00:15:13.825 Atomic Write Unit (PFail): 1 00:15:13.825 Atomic Compare & Write Unit: 1 00:15:13.825 Fused Compare & Write: Supported 00:15:13.825 Scatter-Gather List 00:15:13.825 SGL Command Set: Supported 00:15:13.825 SGL Keyed: Supported 00:15:13.825 SGL Bit Bucket Descriptor: Not Supported 00:15:13.825 SGL Metadata Pointer: Not Supported 00:15:13.825 Oversized SGL: Not Supported 00:15:13.825 SGL Metadata Address: Not Supported 00:15:13.825 SGL Offset: Supported 00:15:13.825 Transport SGL Data Block: Not Supported 00:15:13.825 Replay Protected Memory Block: Not Supported 00:15:13.825 00:15:13.825 Firmware Slot Information 00:15:13.825 ========================= 00:15:13.825 Active slot: 1 00:15:13.825 Slot 1 Firmware Revision: 25.01 00:15:13.825 00:15:13.825 00:15:13.825 Commands Supported and Effects 00:15:13.825 ============================== 00:15:13.825 Admin Commands 00:15:13.825 -------------- 00:15:13.825 Get Log Page (02h): Supported 00:15:13.825 Identify (06h): Supported 00:15:13.825 Abort (08h): Supported 00:15:13.825 Set Features (09h): Supported 00:15:13.825 Get Features (0Ah): Supported 00:15:13.825 Asynchronous Event Request (0Ch): Supported 00:15:13.825 Keep Alive (18h): Supported 00:15:13.825 I/O Commands 00:15:13.825 ------------ 00:15:13.825 Flush (00h): Supported LBA-Change 00:15:13.825 Write (01h): Supported LBA-Change 00:15:13.825 Read (02h): Supported 00:15:13.825 Compare (05h): Supported 00:15:13.825 Write Zeroes (08h): Supported LBA-Change 00:15:13.825 Dataset Management (09h): Supported LBA-Change 00:15:13.825 Copy (19h): Supported LBA-Change 00:15:13.825 00:15:13.825 Error Log 00:15:13.825 ========= 00:15:13.825 00:15:13.825 Arbitration 00:15:13.825 =========== 00:15:13.825 Arbitration Burst: 1 00:15:13.825 00:15:13.825 Power Management 00:15:13.825 ================ 00:15:13.825 Number of Power States: 1 00:15:13.825 Current Power State: Power State #0 00:15:13.825 Power State #0: 00:15:13.825 Max Power: 0.00 W 00:15:13.825 Non-Operational State: Operational 00:15:13.825 Entry Latency: Not Reported 00:15:13.825 Exit Latency: Not Reported 00:15:13.825 Relative Read Throughput: 0 00:15:13.825 Relative Read Latency: 0 00:15:13.825 Relative Write Throughput: 0 00:15:13.825 Relative Write Latency: 0 00:15:13.825 Idle Power: Not Reported 00:15:13.825 Active Power: Not Reported 00:15:13.825 Non-Operational Permissive Mode: Not Supported 00:15:13.825 00:15:13.825 Health Information 00:15:13.825 ================== 00:15:13.825 Critical Warnings: 00:15:13.825 Available Spare Space: OK 00:15:13.825 Temperature: OK 00:15:13.825 Device Reliability: OK 00:15:13.825 Read Only: No 00:15:13.825 Volatile Memory Backup: OK 00:15:13.825 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:13.825 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:13.825 Available Spare: 0% 00:15:13.825 Available Spare Threshold: 0% 00:15:13.825 Life Percentage Used:[2024-10-08 09:21:05.438093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:15:13.825 [2024-10-08 09:21:05.438100] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.825 [2024-10-08 09:21:05.438104] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.825 [2024-10-08 09:21:05.438108] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cf2c0) on tqpair=0x76a750 00:15:13.825 [2024-10-08 09:21:05.438213] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.825 [2024-10-08 09:21:05.438221] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x76a750) 00:15:13.825 [2024-10-08 09:21:05.438230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.825 [2024-10-08 09:21:05.438284] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cf2c0, cid 7, qid 0 00:15:13.825 [2024-10-08 09:21:05.442825] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.825 [2024-10-08 09:21:05.442846] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.825 [2024-10-08 09:21:05.442868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.825 [2024-10-08 09:21:05.442873] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cf2c0) on tqpair=0x76a750 00:15:13.825 [2024-10-08 09:21:05.442918] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:13.825 [2024-10-08 09:21:05.442932] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce840) on tqpair=0x76a750 00:15:13.825 [2024-10-08 09:21:05.442939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.825 [2024-10-08 09:21:05.442947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ce9c0) on tqpair=0x76a750 00:15:13.825 [2024-10-08 09:21:05.442952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.825 [2024-10-08 09:21:05.442957] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ceb40) on tqpair=0x76a750 00:15:13.825 [2024-10-08 09:21:05.442962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.825 [2024-10-08 09:21:05.442968] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.825 [2024-10-08 09:21:05.442973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.825 [2024-10-08 09:21:05.442983] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.825 [2024-10-08 09:21:05.442988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.825 [2024-10-08 09:21:05.442992] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.825 [2024-10-08 09:21:05.443001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.825 [2024-10-08 09:21:05.443030] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.443081] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.443089] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.443093] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.443097] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.443106] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.443111] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.443115] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.443122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.443146] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.443499] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.443515] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.443520] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.443524] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.443530] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:13.826 [2024-10-08 09:21:05.443535] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:13.826 [2024-10-08 09:21:05.443547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.443552] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.443556] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.443564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.443584] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.443864] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.443879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.443884] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.443888] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.443901] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.443906] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.443910] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.443918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.443939] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.444177] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.444191] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.444196] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.444201] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.444212] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.444217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.444221] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.444229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.444249] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.444497] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.444511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.444516] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.444520] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.444531] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.444537] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.444540] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.444548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.444567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.444828] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.444843] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.444848] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.444852] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.444864] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.444869] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.444873] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.444881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.444901] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.445173] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.445187] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.445192] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.445196] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.445208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.445213] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.445217] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.445224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.445244] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.445489] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.445500] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.445505] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.445509] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.445520] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.445525] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.445529] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.445537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.445555] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.445774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.445786] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.445791] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.445795] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.445806] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.445811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.445815] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.445823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.445843] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.446091] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.446103] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.446107] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.446111] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 
[2024-10-08 09:21:05.446123] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.446128] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.446132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.446139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.446158] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.446429] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.446444] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.446449] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.446453] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.446465] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.446470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.446475] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.446483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.446504] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.450824] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.450844] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.826 [2024-10-08 09:21:05.450865] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.450870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.826 [2024-10-08 09:21:05.450884] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.450890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:13.826 [2024-10-08 09:21:05.450893] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x76a750) 00:15:13.826 [2024-10-08 09:21:05.450902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.826 [2024-10-08 09:21:05.450927] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7cecc0, cid 3, qid 0 00:15:13.826 [2024-10-08 09:21:05.450983] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:13.826 [2024-10-08 09:21:05.450990] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:13.827 [2024-10-08 09:21:05.450993] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:13.827 [2024-10-08 09:21:05.450997] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7cecc0) on tqpair=0x76a750 00:15:13.827 [2024-10-08 09:21:05.451006] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:13.827 0% 00:15:13.827 Data Units Read: 0 00:15:13.827 Data 
Units Written: 0 00:15:13.827 Host Read Commands: 0 00:15:13.827 Host Write Commands: 0 00:15:13.827 Controller Busy Time: 0 minutes 00:15:13.827 Power Cycles: 0 00:15:13.827 Power On Hours: 0 hours 00:15:13.827 Unsafe Shutdowns: 0 00:15:13.827 Unrecoverable Media Errors: 0 00:15:13.827 Lifetime Error Log Entries: 0 00:15:13.827 Warning Temperature Time: 0 minutes 00:15:13.827 Critical Temperature Time: 0 minutes 00:15:13.827 00:15:13.827 Number of Queues 00:15:13.827 ================ 00:15:13.827 Number of I/O Submission Queues: 127 00:15:13.827 Number of I/O Completion Queues: 127 00:15:13.827 00:15:13.827 Active Namespaces 00:15:13.827 ================= 00:15:13.827 Namespace ID:1 00:15:13.827 Error Recovery Timeout: Unlimited 00:15:13.827 Command Set Identifier: NVM (00h) 00:15:13.827 Deallocate: Supported 00:15:13.827 Deallocated/Unwritten Error: Not Supported 00:15:13.827 Deallocated Read Value: Unknown 00:15:13.827 Deallocate in Write Zeroes: Not Supported 00:15:13.827 Deallocated Guard Field: 0xFFFF 00:15:13.827 Flush: Supported 00:15:13.827 Reservation: Supported 00:15:13.827 Namespace Sharing Capabilities: Multiple Controllers 00:15:13.827 Size (in LBAs): 131072 (0GiB) 00:15:13.827 Capacity (in LBAs): 131072 (0GiB) 00:15:13.827 Utilization (in LBAs): 131072 (0GiB) 00:15:13.827 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:13.827 EUI64: ABCDEF0123456789 00:15:13.827 UUID: 97d225e5-b1cb-4652-8b54-74607c7bd913 00:15:13.827 Thin Provisioning: Not Supported 00:15:13.827 Per-NS Atomic Units: Yes 00:15:13.827 Atomic Boundary Size (Normal): 0 00:15:13.827 Atomic Boundary Size (PFail): 0 00:15:13.827 Atomic Boundary Offset: 0 00:15:13.827 Maximum Single Source Range Length: 65535 00:15:13.827 Maximum Copy Length: 65535 00:15:13.827 Maximum Source Range Count: 1 00:15:13.827 NGUID/EUI64 Never Reused: No 00:15:13.827 Namespace Write Protected: No 00:15:13.827 Number of LBA Formats: 1 00:15:13.827 Current LBA Format: LBA Format #00 00:15:13.827 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:13.827 00:15:13.827 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:14.086 rmmod nvme_tcp 00:15:14.086 rmmod nvme_fabrics 00:15:14.086 rmmod nvme_keyring 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 74347 ']' 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 74347 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74347 ']' 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74347 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74347 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:14.086 killing process with pid 74347 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74347' 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74347 00:15:14.086 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74347 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:14.345 09:21:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:14.345 09:21:06 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:14.346 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:14.605 00:15:14.605 real 0m2.944s 00:15:14.605 user 0m7.457s 00:15:14.605 sys 0m0.787s 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:14.605 ************************************ 00:15:14.605 END TEST nvmf_identify 00:15:14.605 ************************************ 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:14.605 ************************************ 00:15:14.605 START TEST nvmf_perf 00:15:14.605 ************************************ 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:14.605 * Looking for test storage... 
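(Editor's note: the nvmf_identify teardown traced above reduces to roughly the following commands. This is an illustrative sketch assembled only from the commands visible in the trace, not part of the captured output; the subsystem NQN and the PID 74347 are specific to this run.)

    # remove the test subsystem through SPDK's JSON-RPC helper
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the kernel NVMe/TCP initiator modules pulled in for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf_tgt process (pid 74347 in this run); nvmftestfini then
    # deletes the nvmf_* veth interfaces, the bridge and the target netns
    kill 74347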
00:15:14.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:14.605 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:14.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.864 --rc genhtml_branch_coverage=1 00:15:14.864 --rc genhtml_function_coverage=1 00:15:14.864 --rc genhtml_legend=1 00:15:14.864 --rc geninfo_all_blocks=1 00:15:14.864 --rc geninfo_unexecuted_blocks=1 00:15:14.864 00:15:14.864 ' 00:15:14.864 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:14.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.864 --rc genhtml_branch_coverage=1 00:15:14.864 --rc genhtml_function_coverage=1 00:15:14.865 --rc genhtml_legend=1 00:15:14.865 --rc geninfo_all_blocks=1 00:15:14.865 --rc geninfo_unexecuted_blocks=1 00:15:14.865 00:15:14.865 ' 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:14.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.865 --rc genhtml_branch_coverage=1 00:15:14.865 --rc genhtml_function_coverage=1 00:15:14.865 --rc genhtml_legend=1 00:15:14.865 --rc geninfo_all_blocks=1 00:15:14.865 --rc geninfo_unexecuted_blocks=1 00:15:14.865 00:15:14.865 ' 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:14.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.865 --rc genhtml_branch_coverage=1 00:15:14.865 --rc genhtml_function_coverage=1 00:15:14.865 --rc genhtml_legend=1 00:15:14.865 --rc geninfo_all_blocks=1 00:15:14.865 --rc geninfo_unexecuted_blocks=1 00:15:14.865 00:15:14.865 ' 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.865 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:14.865 Cannot find device "nvmf_init_br" 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:14.865 Cannot find device "nvmf_init_br2" 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:14.865 Cannot find device "nvmf_tgt_br" 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.865 Cannot find device "nvmf_tgt_br2" 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:14.865 Cannot find device "nvmf_init_br" 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:14.865 Cannot find device "nvmf_init_br2" 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:14.865 Cannot find device "nvmf_tgt_br" 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:14.865 Cannot find device "nvmf_tgt_br2" 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:14.865 Cannot find device "nvmf_br" 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:14.865 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:14.865 Cannot find device "nvmf_init_if" 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:14.866 Cannot find device "nvmf_init_if2" 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:14.866 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:15.125 09:21:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:15.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:15.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:15.125 00:15:15.125 --- 10.0.0.3 ping statistics --- 00:15:15.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.125 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:15.125 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:15.125 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:15:15.125 00:15:15.125 --- 10.0.0.4 ping statistics --- 00:15:15.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.125 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:15.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:15.125 00:15:15.125 --- 10.0.0.1 ping statistics --- 00:15:15.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.125 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:15.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:15.125 00:15:15.125 --- 10.0.0.2 ping statistics --- 00:15:15.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.125 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # return 0 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=74611 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 74611 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74611 ']' 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
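(Editor's note: the nvmfappstart step traced here amounts to launching the SPDK target inside the test network namespace and waiting for its RPC socket. The lines below are a sketch of the commands visible in the trace, not additional captured output; the shared-memory id, event mask, core mask and PID 74611 are specific to this run.)

    # start nvmf_tgt in the namespace created by nvmf_veth_init
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # the harness (waitforlisten 74611) then polls until the application
    # is listening on /var/tmp/spdk.sock before issuing further RPCs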
00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.125 09:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:15.385 [2024-10-08 09:21:06.838331] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:15:15.385 [2024-10-08 09:21:06.838418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.385 [2024-10-08 09:21:06.976573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.385 [2024-10-08 09:21:07.060289] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.385 [2024-10-08 09:21:07.060359] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.385 [2024-10-08 09:21:07.060369] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.385 [2024-10-08 09:21:07.060377] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.385 [2024-10-08 09:21:07.060383] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.385 [2024-10-08 09:21:07.061629] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.385 [2024-10-08 09:21:07.061706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.385 [2024-10-08 09:21:07.061806] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.385 [2024-10-08 09:21:07.061807] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.644 [2024-10-08 09:21:07.119330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.644 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.644 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:15:15.644 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:15.644 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:15.644 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:15.644 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.644 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:15.644 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:16.212 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:16.212 09:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:16.471 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:16.471 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:16.730 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:16.730 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:16.730 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:16.730 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:16.730 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:16.988 [2024-10-08 09:21:08.658013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.246 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.504 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:17.504 09:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.762 09:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:17.762 09:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:18.021 09:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:18.280 [2024-10-08 09:21:09.779496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:18.280 09:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:18.538 09:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:18.538 09:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:18.538 09:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:18.538 09:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:19.475 Initializing NVMe Controllers 00:15:19.475 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:19.475 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:19.475 Initialization complete. Launching workers. 00:15:19.475 ======================================================== 00:15:19.475 Latency(us) 00:15:19.475 Device Information : IOPS MiB/s Average min max 00:15:19.475 PCIE (0000:00:10.0) NSID 1 from core 0: 22786.26 89.01 1404.01 369.73 8197.69 00:15:19.475 ======================================================== 00:15:19.475 Total : 22786.26 89.01 1404.01 369.73 8197.69 00:15:19.475 00:15:19.475 09:21:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:20.850 Initializing NVMe Controllers 00:15:20.850 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:20.850 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:20.850 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:20.850 Initialization complete. Launching workers. 
00:15:20.850 ======================================================== 00:15:20.850 Latency(us) 00:15:20.850 Device Information : IOPS MiB/s Average min max 00:15:20.850 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3919.00 15.31 253.80 95.16 6031.11 00:15:20.850 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 126.00 0.49 7982.88 6093.71 12004.42 00:15:20.850 ======================================================== 00:15:20.850 Total : 4045.00 15.80 494.55 95.16 12004.42 00:15:20.850 00:15:20.850 09:21:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:22.227 Initializing NVMe Controllers 00:15:22.227 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:22.227 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:22.227 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:22.227 Initialization complete. Launching workers. 00:15:22.227 ======================================================== 00:15:22.227 Latency(us) 00:15:22.227 Device Information : IOPS MiB/s Average min max 00:15:22.227 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9011.69 35.20 3550.89 545.80 7575.68 00:15:22.227 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3972.02 15.52 8085.05 5815.35 15677.68 00:15:22.227 ======================================================== 00:15:22.227 Total : 12983.71 50.72 4938.00 545.80 15677.68 00:15:22.227 00:15:22.227 09:21:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:22.227 09:21:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:24.762 Initializing NVMe Controllers 00:15:24.762 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:24.762 Controller IO queue size 128, less than required. 00:15:24.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:24.762 Controller IO queue size 128, less than required. 00:15:24.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:24.762 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:24.762 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:24.762 Initialization complete. Launching workers. 
00:15:24.762 ======================================================== 00:15:24.762 Latency(us) 00:15:24.762 Device Information : IOPS MiB/s Average min max 00:15:24.762 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1854.25 463.56 69391.40 39210.51 117594.36 00:15:24.762 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 691.66 172.92 194331.34 65259.63 295648.37 00:15:24.762 ======================================================== 00:15:24.762 Total : 2545.91 636.48 103334.48 39210.51 295648.37 00:15:24.762 00:15:25.021 09:21:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:25.280 Initializing NVMe Controllers 00:15:25.280 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.280 Controller IO queue size 128, less than required. 00:15:25.280 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:25.280 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:25.280 Controller IO queue size 128, less than required. 00:15:25.280 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:25.280 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:25.280 WARNING: Some requested NVMe devices were skipped 00:15:25.280 No valid NVMe controllers or AIO or URING devices found 00:15:25.280 09:21:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:27.817 Initializing NVMe Controllers 00:15:27.817 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.817 Controller IO queue size 128, less than required. 00:15:27.817 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:27.817 Controller IO queue size 128, less than required. 00:15:27.817 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:27.817 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:27.817 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:27.817 Initialization complete. Launching workers. 
00:15:27.817 00:15:27.817 ==================== 00:15:27.817 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:27.817 TCP transport: 00:15:27.817 polls: 10576 00:15:27.817 idle_polls: 7159 00:15:27.817 sock_completions: 3417 00:15:27.817 nvme_completions: 5363 00:15:27.817 submitted_requests: 8012 00:15:27.817 queued_requests: 1 00:15:27.817 00:15:27.817 ==================== 00:15:27.817 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:27.817 TCP transport: 00:15:27.817 polls: 10756 00:15:27.817 idle_polls: 6477 00:15:27.817 sock_completions: 4279 00:15:27.817 nvme_completions: 6305 00:15:27.817 submitted_requests: 9426 00:15:27.817 queued_requests: 1 00:15:27.817 ======================================================== 00:15:27.817 Latency(us) 00:15:27.817 Device Information : IOPS MiB/s Average min max 00:15:27.817 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1339.19 334.80 98208.84 54100.97 167979.05 00:15:27.817 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1574.46 393.62 81767.62 38884.20 117716.23 00:15:27.817 ======================================================== 00:15:27.817 Total : 2913.65 728.41 89324.44 38884.20 167979.05 00:15:27.817 00:15:27.817 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:27.817 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:28.076 rmmod nvme_tcp 00:15:28.076 rmmod nvme_fabrics 00:15:28.076 rmmod nvme_keyring 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 74611 ']' 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 74611 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74611 ']' 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74611 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74611 00:15:28.076 killing process with pid 74611 00:15:28.076 09:21:19 
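All of the spdk_nvme_perf sweeps above share one invocation shape against the subsystem provisioned earlier (nqn.2016-06.io.spdk:cnode1, listening on 10.0.0.3:4420); only queue depth (-q), I/O size in bytes (-o), read mix (-M, here 50%) and duration in seconds (-t) change between runs. A representative command, copied from the transport-statistics pass above:

    # 128-deep, 256 KiB random 50/50 read/write over NVMe/TCP,
    # with per-connection transport statistics printed at the end
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        --transport-stat

The earlier local baseline uses the same tool with -r 'trtype:PCIe traddr:0000:00:10.0', so the fabric numbers can be compared against the raw device.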
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74611' 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74611 00:15:28.076 09:21:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74611 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:29.024 00:15:29.024 real 0m14.494s 00:15:29.024 user 0m52.432s 00:15:29.024 sys 0m4.072s 00:15:29.024 09:21:20 
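Cleanup is the mirror image of the setup, and it is why every iptables rule was inserted with an SPDK_NVMF comment: those rules can be filtered back out without disturbing anything else before the bridge, veth pairs and namespace are removed. A minimal sketch with the same names (the final namespace removal happens inside _remove_spdk_ns, whose body is not shown in this log, so ip netns delete is assumed here):

    # strip only the rules this test added, by their SPDK_NVMF comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # detach ports from the bridge, then delete bridge, veths and namespace
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link set nvmf_init_br down
    ip link set nvmf_tgt_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns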
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:29.024 ************************************ 00:15:29.024 END TEST nvmf_perf 00:15:29.024 ************************************ 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.024 ************************************ 00:15:29.024 START TEST nvmf_fio_host 00:15:29.024 ************************************ 00:15:29.024 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:29.284 * Looking for test storage... 00:15:29.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:29.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.284 --rc genhtml_branch_coverage=1 00:15:29.284 --rc genhtml_function_coverage=1 00:15:29.284 --rc genhtml_legend=1 00:15:29.284 --rc geninfo_all_blocks=1 00:15:29.284 --rc geninfo_unexecuted_blocks=1 00:15:29.284 00:15:29.284 ' 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:29.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.284 --rc genhtml_branch_coverage=1 00:15:29.284 --rc genhtml_function_coverage=1 00:15:29.284 --rc genhtml_legend=1 00:15:29.284 --rc geninfo_all_blocks=1 00:15:29.284 --rc geninfo_unexecuted_blocks=1 00:15:29.284 00:15:29.284 ' 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:29.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.284 --rc genhtml_branch_coverage=1 00:15:29.284 --rc genhtml_function_coverage=1 00:15:29.284 --rc genhtml_legend=1 00:15:29.284 --rc geninfo_all_blocks=1 00:15:29.284 --rc geninfo_unexecuted_blocks=1 00:15:29.284 00:15:29.284 ' 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:29.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.284 --rc genhtml_branch_coverage=1 00:15:29.284 --rc genhtml_function_coverage=1 00:15:29.284 --rc genhtml_legend=1 00:15:29.284 --rc geninfo_all_blocks=1 00:15:29.284 --rc geninfo_unexecuted_blocks=1 00:15:29.284 00:15:29.284 ' 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.284 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.285 09:21:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.285 09:21:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.285 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:29.285 Cannot find device "nvmf_init_br" 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:29.285 Cannot find device "nvmf_init_br2" 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:29.285 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:29.544 Cannot find device "nvmf_tgt_br" 00:15:29.544 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:29.544 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:29.544 Cannot find device "nvmf_tgt_br2" 00:15:29.544 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:29.544 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:29.544 Cannot find device "nvmf_init_br" 00:15:29.544 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:29.544 09:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:29.544 Cannot find device "nvmf_init_br2" 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:29.544 Cannot find device "nvmf_tgt_br" 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:29.544 Cannot find device "nvmf_tgt_br2" 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:29.544 Cannot find device "nvmf_br" 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:29.544 Cannot find device "nvmf_init_if" 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:29.544 Cannot find device "nvmf_init_if2" 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.544 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:29.803 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:29.803 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:15:29.803 00:15:29.803 --- 10.0.0.3 ping statistics --- 00:15:29.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.803 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:29.803 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:29.803 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:15:29.803 00:15:29.803 --- 10.0.0.4 ping statistics --- 00:15:29.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.803 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:29.803 00:15:29.803 --- 10.0.0.1 ping statistics --- 00:15:29.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.803 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:29.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:29.803 00:15:29.803 --- 10.0.0.2 ping statistics --- 00:15:29.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.803 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # return 0 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75062 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
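As in the perf test, the target application is started inside the namespace and the harness then blocks on waitforlisten until the RPC socket answers. The helper's body is not part of this excerpt; a simple equivalent wait loop, reusing the paths from the log, might look like:

    # launch nvmf_tgt on cores 0-3 with every tracepoint group enabled
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll the default RPC socket until the app responds (sketch, not the harness's exact loop)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done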
00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75062 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 75062 ']' 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.803 09:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.803 [2024-10-08 09:21:21.417488] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:15:29.803 [2024-10-08 09:21:21.417729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.063 [2024-10-08 09:21:21.559609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.063 [2024-10-08 09:21:21.660073] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.063 [2024-10-08 09:21:21.660344] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.063 [2024-10-08 09:21:21.660512] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.063 [2024-10-08 09:21:21.660661] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.063 [2024-10-08 09:21:21.660703] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
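The start-up notices spell out how to look inside the target while it runs: because the app was launched with -e 0xFFFF and -i 0, all tracepoint groups are recording into the shared-memory ring /dev/shm/nvmf_trace.0. A snapshot can be taken as the notice suggests (the spdk_trace binary is assumed to sit next to the other SPDK build outputs):

    # decode a live snapshot of the nvmf app's trace ring (shm name nvmf, instance id 0)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0

    # or keep the raw ring for offline analysis, as the last notice recommends
    cp /dev/shm/nvmf_trace.0 /tmp/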
00:15:30.063 [2024-10-08 09:21:21.662217] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.063 [2024-10-08 09:21:21.662458] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.063 [2024-10-08 09:21:21.662467] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.063 [2024-10-08 09:21:21.662318] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.063 [2024-10-08 09:21:21.721249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.999 09:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.999 09:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:15:30.999 09:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:31.258 [2024-10-08 09:21:22.726595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.258 09:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:31.258 09:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:31.258 09:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.258 09:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:31.517 Malloc1 00:15:31.517 09:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:31.775 09:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:32.035 09:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:32.293 [2024-10-08 09:21:23.861785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.293 09:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:32.552 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:32.552 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:32.552 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:32.552 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:32.553 09:21:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:32.812 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:32.812 fio-3.35 00:15:32.812 Starting 1 thread 00:15:35.346 00:15:35.346 test: (groupid=0, jobs=1): err= 0: pid=75145: Tue Oct 8 09:21:26 2024 00:15:35.346 read: IOPS=9433, BW=36.8MiB/s (38.6MB/s)(73.9MiB/2006msec) 00:15:35.346 slat (nsec): min=1967, max=347942, avg=2746.45, stdev=3662.05 00:15:35.346 clat (usec): min=2822, max=12284, avg=7056.06, stdev=583.95 00:15:35.346 lat (usec): min=2861, max=12286, avg=7058.81, stdev=583.89 00:15:35.346 clat percentiles (usec): 00:15:35.346 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:15:35.346 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:15:35.346 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 8029], 00:15:35.346 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[11207], 99.95th=[11600], 00:15:35.346 | 99.99th=[12256] 00:15:35.346 bw ( KiB/s): min=36120, max=38544, per=99.95%, avg=37716.00, stdev=1124.38, samples=4 00:15:35.346 iops : min= 9030, max= 9636, avg=9429.00, stdev=281.10, samples=4 00:15:35.346 write: IOPS=9434, BW=36.9MiB/s (38.6MB/s)(73.9MiB/2006msec); 0 zone resets 00:15:35.346 slat (usec): min=2, max=249, avg= 2.92, stdev= 2.73 00:15:35.346 clat (usec): min=2666, max=11986, avg=6466.79, stdev=528.74 00:15:35.346 lat (usec): min=2680, max=11989, avg=6469.71, stdev=528.83 00:15:35.346 clat 
percentiles (usec): 00:15:35.346 | 1.00th=[ 5473], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6063], 00:15:35.346 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:15:35.346 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7111], 95.00th=[ 7373], 00:15:35.346 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[ 9896], 99.95th=[10814], 00:15:35.346 | 99.99th=[11469] 00:15:35.346 bw ( KiB/s): min=37064, max=38464, per=99.98%, avg=37730.00, stdev=586.18, samples=4 00:15:35.346 iops : min= 9266, max= 9616, avg=9432.50, stdev=146.55, samples=4 00:15:35.346 lat (msec) : 4=0.06%, 10=99.82%, 20=0.12% 00:15:35.346 cpu : usr=66.93%, sys=24.64%, ctx=8, majf=0, minf=6 00:15:35.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:35.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.346 issued rwts: total=18924,18926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.346 00:15:35.346 Run status group 0 (all jobs): 00:15:35.346 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.9MiB (77.5MB), run=2006-2006msec 00:15:35.346 WRITE: bw=36.9MiB/s (38.6MB/s), 36.9MiB/s-36.9MiB/s (38.6MB/s-38.6MB/s), io=73.9MiB (77.5MB), run=2006-2006msec 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:35.346 09:21:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:35.346 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:35.346 fio-3.35 00:15:35.346 Starting 1 thread 00:15:37.895 00:15:37.895 test: (groupid=0, jobs=1): err= 0: pid=75188: Tue Oct 8 09:21:29 2024 00:15:37.895 read: IOPS=8139, BW=127MiB/s (133MB/s)(255MiB/2007msec) 00:15:37.895 slat (usec): min=2, max=131, avg= 3.49, stdev= 2.57 00:15:37.895 clat (usec): min=1505, max=18509, avg=8887.12, stdev=2389.80 00:15:37.895 lat (usec): min=1508, max=18513, avg=8890.61, stdev=2389.86 00:15:37.895 clat percentiles (usec): 00:15:37.895 | 1.00th=[ 4228], 5.00th=[ 5080], 10.00th=[ 5866], 20.00th=[ 6915], 00:15:37.895 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9372], 00:15:37.895 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11863], 95.00th=[13042], 00:15:37.895 | 99.00th=[15533], 99.50th=[16057], 99.90th=[17171], 99.95th=[17433], 00:15:37.895 | 99.99th=[18482] 00:15:37.895 bw ( KiB/s): min=59488, max=78560, per=52.06%, avg=67800.00, stdev=9119.81, samples=4 00:15:37.895 iops : min= 3718, max= 4910, avg=4237.50, stdev=569.99, samples=4 00:15:37.895 write: IOPS=4894, BW=76.5MiB/s (80.2MB/s)(139MiB/1816msec); 0 zone resets 00:15:37.895 slat (usec): min=30, max=370, avg=35.43, stdev= 9.92 00:15:37.895 clat (usec): min=5463, max=20762, avg=11804.76, stdev=2287.55 00:15:37.895 lat (usec): min=5495, max=20794, avg=11840.18, stdev=2288.01 00:15:37.895 clat percentiles (usec): 00:15:37.895 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:15:37.895 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[12256], 00:15:37.895 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14877], 95.00th=[16188], 00:15:37.895 | 99.00th=[17695], 99.50th=[18220], 99.90th=[20055], 99.95th=[20579], 00:15:37.895 | 99.99th=[20841] 00:15:37.895 bw ( KiB/s): min=61760, max=81600, per=90.08%, avg=70544.00, stdev=9549.72, samples=4 00:15:37.895 iops : min= 3858, max= 5100, avg=4409.00, stdev=597.81, samples=4 00:15:37.895 lat (msec) : 2=0.03%, 4=0.38%, 10=51.86%, 20=47.69%, 50=0.05% 00:15:37.895 cpu : usr=82.10%, sys=14.26%, ctx=4, majf=0, minf=13 00:15:37.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:37.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:37.895 issued rwts: total=16336,8888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:37.895 00:15:37.895 Run status group 0 (all jobs): 
00:15:37.895 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (268MB), run=2007-2007msec 00:15:37.895 WRITE: bw=76.5MiB/s (80.2MB/s), 76.5MiB/s-76.5MiB/s (80.2MB/s-80.2MB/s), io=139MiB (146MB), run=1816-1816msec 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:37.895 rmmod nvme_tcp 00:15:37.895 rmmod nvme_fabrics 00:15:37.895 rmmod nvme_keyring 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 75062 ']' 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 75062 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 75062 ']' 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 75062 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.895 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75062 00:15:38.154 killing process with pid 75062 00:15:38.154 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:38.154 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:38.154 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75062' 00:15:38.154 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 75062 00:15:38.154 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 75062 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:38.413 09:21:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:38.413 09:21:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:38.413 09:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.413 09:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.413 09:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:38.413 09:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.413 09:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.413 09:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.413 09:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:38.413 ************************************ 00:15:38.413 END TEST nvmf_fio_host 00:15:38.413 ************************************ 00:15:38.413 00:15:38.413 real 0m9.389s 00:15:38.413 user 0m37.275s 00:15:38.413 sys 0m2.407s 00:15:38.413 09:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.413 09:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.672 ************************************ 00:15:38.672 START TEST nvmf_failover 
00:15:38.672 ************************************ 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:38.672 * Looking for test storage... 00:15:38.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.672 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:38.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.673 --rc genhtml_branch_coverage=1 00:15:38.673 --rc genhtml_function_coverage=1 00:15:38.673 --rc genhtml_legend=1 00:15:38.673 --rc geninfo_all_blocks=1 00:15:38.673 --rc geninfo_unexecuted_blocks=1 00:15:38.673 00:15:38.673 ' 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:38.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.673 --rc genhtml_branch_coverage=1 00:15:38.673 --rc genhtml_function_coverage=1 00:15:38.673 --rc genhtml_legend=1 00:15:38.673 --rc geninfo_all_blocks=1 00:15:38.673 --rc geninfo_unexecuted_blocks=1 00:15:38.673 00:15:38.673 ' 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:38.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.673 --rc genhtml_branch_coverage=1 00:15:38.673 --rc genhtml_function_coverage=1 00:15:38.673 --rc genhtml_legend=1 00:15:38.673 --rc geninfo_all_blocks=1 00:15:38.673 --rc geninfo_unexecuted_blocks=1 00:15:38.673 00:15:38.673 ' 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:38.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.673 --rc genhtml_branch_coverage=1 00:15:38.673 --rc genhtml_function_coverage=1 00:15:38.673 --rc genhtml_legend=1 00:15:38.673 --rc geninfo_all_blocks=1 00:15:38.673 --rc geninfo_unexecuted_blocks=1 00:15:38.673 00:15:38.673 ' 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.673 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.932 
09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:38.932 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:38.933 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 
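For reference, the per-test knobs that host/failover.sh traces just above reduce to a handful of assignments (values copied from the trace; the trailing comments are interpretation, not part of the script):

MALLOC_BDEV_SIZE=64                                    # size in MB of the malloc bdev created later as the namespace
MALLOC_BLOCK_SIZE=512                                  # block size in bytes
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # target-side RPC client (default socket /var/tmp/spdk.sock)
bdevperf_rpc_sock=/var/tmp/bdevperf.sock               # host-side RPC socket used by bdevperf further down

nvmftestinit then builds the veth/netns test fabric whose setup is traced next.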
00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:38.933 Cannot find device "nvmf_init_br" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:38.933 Cannot find device "nvmf_init_br2" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:38.933 Cannot find device "nvmf_tgt_br" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:38.933 Cannot find device "nvmf_tgt_br2" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:38.933 Cannot find device "nvmf_init_br" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:38.933 Cannot find device "nvmf_init_br2" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:38.933 Cannot find device "nvmf_tgt_br" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:38.933 Cannot find device "nvmf_tgt_br2" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:38.933 Cannot find device "nvmf_br" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:38.933 Cannot find device "nvmf_init_if" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:38.933 Cannot find device "nvmf_init_if2" 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:38.933 
09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:38.933 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:39.192 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:39.193 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.193 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:15:39.193 00:15:39.193 --- 10.0.0.3 ping statistics --- 00:15:39.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.193 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:39.193 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:39.193 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:39.193 00:15:39.193 --- 10.0.0.4 ping statistics --- 00:15:39.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.193 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:39.193 00:15:39.193 --- 10.0.0.1 ping statistics --- 00:15:39.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.193 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:39.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:39.193 00:15:39.193 --- 10.0.0.2 ping statistics --- 00:15:39.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.193 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # return 0 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=75458 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 75458 00:15:39.193 09:21:30 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75458 ']' 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.193 09:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:39.193 [2024-10-08 09:21:30.839913] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:15:39.193 [2024-10-08 09:21:30.840206] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.452 [2024-10-08 09:21:30.983577] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.710 [2024-10-08 09:21:31.151622] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.710 [2024-10-08 09:21:31.151721] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.710 [2024-10-08 09:21:31.151758] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.710 [2024-10-08 09:21:31.151773] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.710 [2024-10-08 09:21:31.151785] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
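Before the target came up, nvmftestinit/nvmf_veth_init assembled the test fabric traced above. Condensed into a sketch (commands taken from the trace; the second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way, and the individual "ip link set ... up" steps are omitted here for brevity):

# target interface lives inside the nvmf_tgt_ns_spdk namespace; the initiator side stays in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                               # reachability check seen in the trace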
00:15:39.710 [2024-10-08 09:21:31.152527] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.710 [2024-10-08 09:21:31.152680] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.711 [2024-10-08 09:21:31.152695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.711 [2024-10-08 09:21:31.234212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.278 09:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.278 09:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:40.278 09:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:40.278 09:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.278 09:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:40.278 09:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.278 09:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:40.537 [2024-10-08 09:21:32.136606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.537 09:21:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:40.795 Malloc0 00:15:40.795 09:21:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:41.054 09:21:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:41.313 09:21:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:41.572 [2024-10-08 09:21:33.122964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:41.572 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:41.830 [2024-10-08 09:21:33.351063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:41.831 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:42.089 [2024-10-08 09:21:33.587276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:42.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
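With the target process listening on /var/tmp/spdk.sock, failover.sh configures it through the RPC sequence traced above. Condensed (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py as in the trace; flag comments are interpretation):

rpc.py nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport, options as traced
rpc.py bdev_malloc_create 64 512 -b Malloc0                                        # 64 MB RAM-backed bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

Three listeners on the same address give the host three interchangeable paths to the one subsystem, which is what the failover steps below rotate through.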
00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75517 00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75517 /var/tmp/bdevperf.sock 00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75517 ']' 00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.089 09:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:43.025 09:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.025 09:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:43.025 09:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:43.284 NVMe0n1 00:15:43.284 09:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:43.543 00:15:43.543 09:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75535 00:15:43.543 09:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:43.543 09:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:44.941 09:21:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:44.941 09:21:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:48.230 09:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:48.230 00:15:48.230 09:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:48.489 09:21:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:51.776 09:21:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:51.776 [2024-10-08 09:21:43.381018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:51.776 09:21:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:53.153 09:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:53.153 09:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75535 00:15:59.728 { 00:15:59.728 "results": [ 00:15:59.728 { 00:15:59.728 "job": "NVMe0n1", 00:15:59.728 "core_mask": "0x1", 00:15:59.728 "workload": "verify", 00:15:59.728 "status": "finished", 00:15:59.728 "verify_range": { 00:15:59.728 "start": 0, 00:15:59.728 "length": 16384 00:15:59.728 }, 00:15:59.728 "queue_depth": 128, 00:15:59.728 "io_size": 4096, 00:15:59.728 "runtime": 15.01101, 00:15:59.728 "iops": 9343.275369212331, 00:15:59.728 "mibps": 36.49716941098567, 00:15:59.728 "io_failed": 3341, 00:15:59.728 "io_timeout": 0, 00:15:59.728 "avg_latency_us": 13351.634725369622, 00:15:59.728 "min_latency_us": 599.5054545454545, 00:15:59.728 "max_latency_us": 16086.10909090909 00:15:59.728 } 00:15:59.728 ], 00:15:59.728 "core_count": 1 00:15:59.728 } 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75517 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75517 ']' 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75517 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75517 00:15:59.728 killing process with pid 75517 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75517' 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75517 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75517 00:15:59.728 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:59.728 [2024-10-08 09:21:33.656257] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
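Read together, the interleaved host- and target-side RPCs above amount to the following failover exercise (rpc.py, bdevperf and bdevperf.py abbreviate the full spdk_repo paths shown in the trace; the whole sequence runs inside the 15-second bdevperf verify job):

# host side: bdevperf in RPC-wait mode, then two paths to cnode1 with an explicit failover policy
bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &            # the 15 s workload (run_test_pid above)
sleep 1

# target side: flap listeners while I/O is in flight
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 3
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
sleep 3
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
wait                                                             # for perform_tests to finish the 15 s run

The results block above (err=0, roughly 9343 IOPS, io_failed=3341) is what that run reported once all listener flaps completed.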
00:15:59.728 [2024-10-08 09:21:33.656363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75517 ] 00:15:59.728 [2024-10-08 09:21:33.811766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.728 [2024-10-08 09:21:33.932154] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.728 [2024-10-08 09:21:33.990944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.728 Running I/O for 15 seconds... 00:15:59.728 7317.00 IOPS, 28.58 MiB/s [2024-10-08T09:21:51.411Z] [2024-10-08 09:21:36.487179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.728 [2024-10-08 09:21:36.487257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.728 [2024-10-08 09:21:36.487302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.728 [2024-10-08 09:21:36.487319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.728 [2024-10-08 09:21:36.487335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.728 [2024-10-08 09:21:36.487349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.728 [2024-10-08 09:21:36.487365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.728 [2024-10-08 09:21:36.487378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.728 [2024-10-08 09:21:36.487393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.728 [2024-10-08 09:21:36.487406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.728 [2024-10-08 09:21:36.487421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.728 [2024-10-08 09:21:36.487434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
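The repeated "ABORTED - SQ DELETION (00/08)" completions above (status code type 0x0, status code 0x08: Command Aborted due to SQ Deletion) are what the NVMe driver prints when a queue pair is torn down with writes still outstanding; their timestamps line up with the 4420 listener being removed at 09:21:36, after which the controller attached with -x failover carries on over the remaining path. To watch the path state while reproducing such a run, the bdevperf RPC socket can be queried; this query is not part of the traced test, only a hypothetical way to inspect it:

# hypothetical inspection during the failover window (not in the traced run)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers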
00:15:59.729 [2024-10-08 09:21:36.487506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487866] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.487983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.487997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:47 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67656 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.729 [2024-10-08 09:21:36.488719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.729 [2024-10-08 09:21:36.488734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.729 [2024-10-08 09:21:36.488756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.488806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 
09:21:36.488833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.488855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.488870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.488885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.488900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.488915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.488929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.488945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.488959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.488975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.488989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.730 [2024-10-08 09:21:36.489289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.730 [2024-10-08 09:21:36.489317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.730 [2024-10-08 09:21:36.489566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.489980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.489993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.490008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.490021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.490043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.490057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.490072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.490086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.730 [2024-10-08 09:21:36.490101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.730 [2024-10-08 09:21:36.490114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:59.731 [2024-10-08 09:21:36.490129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490493] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.490982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.490996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491157] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.731 [2024-10-08 09:21:36.491379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.731 [2024-10-08 09:21:36.491394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ab660 is same with the state(6) to be set 00:15:59.731 [2024-10-08 09:21:36.491417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.731 [2024-10-08 09:21:36.491428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.731 [2024-10-08 09:21:36.491438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67344 len:8 PRP1 0x0 PRP2 0x0 00:15:59.731 [2024-10-08 09:21:36.491456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:36.491513] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ab660 was disconnected and freed. reset controller. 00:15:59.732 [2024-10-08 09:21:36.491531] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:59.732 [2024-10-08 09:21:36.491583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.732 [2024-10-08 09:21:36.491604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:36.491619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.732 [2024-10-08 09:21:36.491633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:36.491647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.732 [2024-10-08 09:21:36.491660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:36.491674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.732 [2024-10-08 09:21:36.491687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:36.491701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:59.732 [2024-10-08 09:21:36.495406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:59.732 [2024-10-08 09:21:36.495443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183d2e0 (9): Bad file descriptor 00:15:59.732 [2024-10-08 09:21:36.534737] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:59.732 8132.00 IOPS, 31.77 MiB/s [2024-10-08T09:21:51.415Z] 8664.00 IOPS, 33.84 MiB/s [2024-10-08T09:21:51.415Z] 8883.00 IOPS, 34.70 MiB/s [2024-10-08T09:21:51.415Z] [2024-10-08 09:21:40.104551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.104968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.104985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.732 [2024-10-08 09:21:40.105077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.732 [2024-10-08 09:21:40.105116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.732 [2024-10-08 09:21:40.105162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.732 [2024-10-08 09:21:40.105191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.732 [2024-10-08 09:21:40.105220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.732 [2024-10-08 09:21:40.105252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.732 [2024-10-08 09:21:40.105281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.732 [2024-10-08 09:21:40.105310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.732 [2024-10-08 09:21:40.105561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.732 [2024-10-08 09:21:40.105576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.105918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.105956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.105972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.105986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 
09:21:40.106003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:114 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.733 [2024-10-08 09:21:40.106845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.106969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.733 [2024-10-08 09:21:40.106986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.733 [2024-10-08 09:21:40.107001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88192 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 
09:21:40.107405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.734 [2024-10-08 09:21:40.107728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.107971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.107987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.108001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.108031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.108061] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.108117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.108147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.108176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.108205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.734 [2024-10-08 09:21:40.108233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18af510 is same with the state(6) to be set 00:15:59.734 [2024-10-08 09:21:40.108266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.734 [2024-10-08 09:21:40.108276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.734 [2024-10-08 09:21:40.108287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88336 len:8 PRP1 0x0 PRP2 0x0 00:15:59.734 [2024-10-08 09:21:40.108300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.734 [2024-10-08 09:21:40.108325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.734 [2024-10-08 09:21:40.108350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88944 len:8 PRP1 0x0 PRP2 0x0 00:15:59.734 [2024-10-08 09:21:40.108363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.734 [2024-10-08 09:21:40.108386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.734 [2024-10-08 09:21:40.108396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:88952 len:8 PRP1 0x0 PRP2 0x0 00:15:59.734 [2024-10-08 09:21:40.108409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.734 [2024-10-08 09:21:40.108422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.734 [2024-10-08 09:21:40.108432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88960 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.108456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.108502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.108513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88968 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.108546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.108561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.108576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88976 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.108602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.108616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.108627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88984 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.108652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.108666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.108677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88992 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.108702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.108716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.108727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89000 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 
[2024-10-08 09:21:40.108752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.108767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.108777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89008 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.108802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.108829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.108842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89016 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.108882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.108896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.108906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89024 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.108931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.108944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.108955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.108974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89032 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.108988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.109026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.109037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89040 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.109051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.109090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.109116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88344 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.109129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.109152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.109162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88352 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.109175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.109198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.109208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88360 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.109220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.109243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.109253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88368 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.109266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.109289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.109307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88376 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.109320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.109343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.109353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88384 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.109366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.109394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.109405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88392 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.109418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.735 [2024-10-08 09:21:40.109446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.735 [2024-10-08 09:21:40.109456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88400 len:8 PRP1 0x0 PRP2 0x0 00:15:59.735 [2024-10-08 09:21:40.109485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109561] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18af510 was disconnected and freed. reset controller. 00:15:59.735 [2024-10-08 09:21:40.109579] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:59.735 [2024-10-08 09:21:40.109636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.735 [2024-10-08 09:21:40.109659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.735 [2024-10-08 09:21:40.109689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.735 [2024-10-08 09:21:40.109718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.735 [2024-10-08 09:21:40.109747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:40.109761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:59.735 [2024-10-08 09:21:40.109824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183d2e0 (9): Bad file descriptor 00:15:59.735 [2024-10-08 09:21:40.113710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:59.735 [2024-10-08 09:21:40.146934] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:59.735 8925.40 IOPS, 34.86 MiB/s [2024-10-08T09:21:51.418Z] 9053.83 IOPS, 35.37 MiB/s [2024-10-08T09:21:51.418Z] 9165.00 IOPS, 35.80 MiB/s [2024-10-08T09:21:51.418Z] 9233.38 IOPS, 36.07 MiB/s [2024-10-08T09:21:51.418Z] 9282.11 IOPS, 36.26 MiB/s [2024-10-08T09:21:51.418Z] [2024-10-08 09:21:44.646038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.735 [2024-10-08 09:21:44.646113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:44.646149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.735 [2024-10-08 09:21:44.646163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.735 [2024-10-08 09:21:44.646176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.735 [2024-10-08 09:21:44.646189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.646236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:59.736 [2024-10-08 09:21:44.646277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.646309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183d2e0 is same with the state(6) to be set 00:15:59.736 [2024-10-08 09:21:44.646992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 
09:21:44.647457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.736 [2024-10-08 09:21:44.647701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.647730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.647771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.647817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.647844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.647872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.647901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.647928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.647956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.647971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.647985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.648000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.648013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.648027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.648041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.648055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:75 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.648068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.648091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.648124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.648139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.648168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.648183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.648196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.648210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.736 [2024-10-08 09:21:44.648223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.736 [2024-10-08 09:21:44.648237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53360 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.648469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.648505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.648532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.648559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.648588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.648615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.648643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 
[2024-10-08 09:21:44.648670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.648981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.648994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.649021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.649050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.649077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.649105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.737 [2024-10-08 09:21:44.649132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.649159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.649186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.649221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.649248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.737 [2024-10-08 09:21:44.649263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.737 [2024-10-08 09:21:44.649275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.649932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 
[2024-10-08 09:21:44.649947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.738 [2024-10-08 09:21:44.649960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.649976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.738 [2024-10-08 09:21:44.649989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.738 [2024-10-08 09:21:44.650017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.738 [2024-10-08 09:21:44.650053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.738 [2024-10-08 09:21:44.650083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.738 [2024-10-08 09:21:44.650110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.738 [2024-10-08 09:21:44.650138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:59.738 [2024-10-08 09:21:44.650166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.650194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.650221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.650276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.650325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.650355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.650385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.738 [2024-10-08 09:21:44.650420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bd410 is same with the state(6) to be set 00:15:59.738 [2024-10-08 09:21:44.650462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.738 [2024-10-08 09:21:44.650480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.738 [2024-10-08 09:21:44.650493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53120 len:8 PRP1 0x0 PRP2 0x0 00:15:59.738 [2024-10-08 09:21:44.650507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.738 [2024-10-08 09:21:44.650533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.738 [2024-10-08 09:21:44.650544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53576 len:8 PRP1 0x0 PRP2 0x0 00:15:59.738 [2024-10-08 09:21:44.650557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.738 [2024-10-08 09:21:44.650598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.738 [2024-10-08 09:21:44.650609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53584 len:8 PRP1 0x0 PRP2 0x0 00:15:59.738 [2024-10-08 09:21:44.650622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.738 [2024-10-08 09:21:44.650646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.738 [2024-10-08 09:21:44.650657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53592 len:8 PRP1 0x0 PRP2 0x0 00:15:59.738 [2024-10-08 09:21:44.650671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.738 [2024-10-08 09:21:44.650699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.650710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.650720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53600 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.650733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.650747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.650772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.650782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53608 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.650795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.650817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.650829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.650840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53616 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.650852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.650866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.650876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.650893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53624 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.650905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.650925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.650936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.650946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53632 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.650959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:59.739 [2024-10-08 09:21:44.650971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.650981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.650991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53640 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.651003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.651017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.651027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.651038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53648 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.651050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.651063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.651073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.651083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53656 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.651096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.651109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.651119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.651129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53664 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.651142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.651155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.651165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.651175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53672 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.651187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.651200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.651209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.651219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53680 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.651232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.651245] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.651255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.651270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53688 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.651288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.651302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:59.739 [2024-10-08 09:21:44.651312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:59.739 [2024-10-08 09:21:44.651322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53696 len:8 PRP1 0x0 PRP2 0x0 00:15:59.739 [2024-10-08 09:21:44.651334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.739 [2024-10-08 09:21:44.651390] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bd410 was disconnected and freed. reset controller. 00:15:59.739 [2024-10-08 09:21:44.651408] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:59.739 [2024-10-08 09:21:44.651423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:59.739 [2024-10-08 09:21:44.655047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:59.739 [2024-10-08 09:21:44.655084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183d2e0 (9): Bad file descriptor 00:15:59.739 [2024-10-08 09:21:44.689198] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:59.739 9262.80 IOPS, 36.18 MiB/s [2024-10-08T09:21:51.422Z] 9282.55 IOPS, 36.26 MiB/s [2024-10-08T09:21:51.422Z] 9300.67 IOPS, 36.33 MiB/s [2024-10-08T09:21:51.422Z] 9323.69 IOPS, 36.42 MiB/s [2024-10-08T09:21:51.422Z] 9341.43 IOPS, 36.49 MiB/s [2024-10-08T09:21:51.422Z] 9341.60 IOPS, 36.49 MiB/s 00:15:59.739 Latency(us) 00:15:59.739 [2024-10-08T09:21:51.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.739 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:59.739 Verification LBA range: start 0x0 length 0x4000 00:15:59.739 NVMe0n1 : 15.01 9343.28 36.50 222.57 0.00 13351.63 599.51 16086.11 00:15:59.739 [2024-10-08T09:21:51.422Z] =================================================================================================================== 00:15:59.739 [2024-10-08T09:21:51.422Z] Total : 9343.28 36.50 222.57 0.00 13351.63 599.51 16086.11 00:15:59.739 Received shutdown signal, test time was about 15.000000 seconds 00:15:59.739 00:15:59.739 Latency(us) 00:15:59.739 [2024-10-08T09:21:51.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.739 [2024-10-08T09:21:51.422Z] =================================================================================================================== 00:15:59.739 [2024-10-08T09:21:51.422Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75713 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75713 /var/tmp/bdevperf.sock 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75713 ']' 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.739 09:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:00.307 09:21:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.307 09:21:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:16:00.307 09:21:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:00.566 [2024-10-08 09:21:52.013958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:00.566 09:21:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:00.825 [2024-10-08 09:21:52.262412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:00.825 09:21:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:01.084 NVMe0n1 00:16:01.084 09:21:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:01.342 00:16:01.342 09:21:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:01.601 00:16:01.601 09:21:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:01.601 09:21:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:01.860 09:21:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.439 09:21:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:05.742 09:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:05.742 09:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:05.742 09:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75798 00:16:05.742 09:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:05.742 09:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75798 00:16:06.679 { 00:16:06.679 "results": [ 00:16:06.679 { 00:16:06.679 "job": "NVMe0n1", 00:16:06.679 "core_mask": "0x1", 00:16:06.679 "workload": "verify", 00:16:06.679 "status": "finished", 00:16:06.679 "verify_range": { 00:16:06.679 "start": 0, 00:16:06.679 "length": 16384 00:16:06.679 }, 00:16:06.679 "queue_depth": 128, 
00:16:06.679 "io_size": 4096, 00:16:06.679 "runtime": 1.013734, 00:16:06.679 "iops": 7217.869776489691, 00:16:06.679 "mibps": 28.194803814412854, 00:16:06.679 "io_failed": 0, 00:16:06.679 "io_timeout": 0, 00:16:06.679 "avg_latency_us": 17665.360422428465, 00:16:06.679 "min_latency_us": 2174.6036363636363, 00:16:06.679 "max_latency_us": 15371.17090909091 00:16:06.679 } 00:16:06.679 ], 00:16:06.679 "core_count": 1 00:16:06.679 } 00:16:06.679 09:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:06.679 [2024-10-08 09:21:50.725365] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:16:06.679 [2024-10-08 09:21:50.725507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75713 ] 00:16:06.679 [2024-10-08 09:21:50.861231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.679 [2024-10-08 09:21:50.978627] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.679 [2024-10-08 09:21:51.034794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:06.679 [2024-10-08 09:21:53.828143] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:06.679 [2024-10-08 09:21:53.828318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.679 [2024-10-08 09:21:53.828344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.680 [2024-10-08 09:21:53.828362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.680 [2024-10-08 09:21:53.828375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.680 [2024-10-08 09:21:53.828389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.680 [2024-10-08 09:21:53.828401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.680 [2024-10-08 09:21:53.828415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.680 [2024-10-08 09:21:53.828428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.680 [2024-10-08 09:21:53.828441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:06.680 [2024-10-08 09:21:53.828503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:06.680 [2024-10-08 09:21:53.828534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19602e0 (9): Bad file descriptor 00:16:06.680 [2024-10-08 09:21:53.837974] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:06.680 Running I/O for 1 seconds... 
00:16:06.680 7188.00 IOPS, 28.08 MiB/s 00:16:06.680 Latency(us) 00:16:06.680 [2024-10-08T09:21:58.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.680 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.680 Verification LBA range: start 0x0 length 0x4000 00:16:06.680 NVMe0n1 : 1.01 7217.87 28.19 0.00 0.00 17665.36 2174.60 15371.17 00:16:06.680 [2024-10-08T09:21:58.363Z] =================================================================================================================== 00:16:06.680 [2024-10-08T09:21:58.363Z] Total : 7217.87 28.19 0.00 0.00 17665.36 2174.60 15371.17 00:16:06.680 09:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:06.680 09:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:06.938 09:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:07.197 09:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:07.197 09:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:07.456 09:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:07.715 09:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:11.031 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:11.031 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:11.031 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75713 00:16:11.031 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75713 ']' 00:16:11.031 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75713 00:16:11.031 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:11.031 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:11.031 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75713 00:16:11.289 killing process with pid 75713 00:16:11.289 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:11.289 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:11.289 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75713' 00:16:11.289 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75713 00:16:11.289 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75713 00:16:11.290 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:11.548 09:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:11.808 rmmod nvme_tcp 00:16:11.808 rmmod nvme_fabrics 00:16:11.808 rmmod nvme_keyring 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 75458 ']' 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 75458 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75458 ']' 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75458 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75458 00:16:11.808 killing process with pid 75458 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75458' 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75458 00:16:11.808 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75458 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:12.067 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:16:12.327 00:16:12.327 real 0m33.753s 00:16:12.327 user 2m9.945s 00:16:12.327 sys 0m5.338s 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:12.327 ************************************ 00:16:12.327 END TEST nvmf_failover 00:16:12.327 ************************************ 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.327 ************************************ 00:16:12.327 START TEST nvmf_host_discovery 00:16:12.327 ************************************ 00:16:12.327 09:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:12.587 * Looking for test storage... 
00:16:12.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:12.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.587 --rc genhtml_branch_coverage=1 00:16:12.587 --rc genhtml_function_coverage=1 00:16:12.587 --rc genhtml_legend=1 00:16:12.587 --rc geninfo_all_blocks=1 00:16:12.587 --rc geninfo_unexecuted_blocks=1 00:16:12.587 00:16:12.587 ' 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:12.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.587 --rc genhtml_branch_coverage=1 00:16:12.587 --rc genhtml_function_coverage=1 00:16:12.587 --rc genhtml_legend=1 00:16:12.587 --rc geninfo_all_blocks=1 00:16:12.587 --rc geninfo_unexecuted_blocks=1 00:16:12.587 00:16:12.587 ' 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:12.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.587 --rc genhtml_branch_coverage=1 00:16:12.587 --rc genhtml_function_coverage=1 00:16:12.587 --rc genhtml_legend=1 00:16:12.587 --rc geninfo_all_blocks=1 00:16:12.587 --rc geninfo_unexecuted_blocks=1 00:16:12.587 00:16:12.587 ' 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:12.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.587 --rc genhtml_branch_coverage=1 00:16:12.587 --rc genhtml_function_coverage=1 00:16:12.587 --rc genhtml_legend=1 00:16:12.587 --rc geninfo_all_blocks=1 00:16:12.587 --rc geninfo_unexecuted_blocks=1 00:16:12.587 00:16:12.587 ' 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.587 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:12.588 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
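(For context on the constants traced just above: DISCOVERY_PORT=8009, DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery, NQN=nqn.2016-06.io.spdk:cnode, HOST_NQN=nqn.2021-12.io.spdk:test and HOST_SOCK=/tmp/host.sock are the values the discovery test consumes later in this run. As a rough, hand-written sketch only - assuming rpc_cmd in this trace dispatches to SPDK's scripts/rpc.py against the given socket - the host-side discovery RPC issued further down is equivalent to:

    # start discovery against the target's discovery service on 10.0.0.3:8009,
    # attaching controllers named nvme* as subsystems are reported
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test

The flags and values are the ones visible in the trace below; only the scripts/rpc.py spelling is an assumption.)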
00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:12.588 Cannot find device "nvmf_init_br" 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:12.588 Cannot find device "nvmf_init_br2" 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:12.588 Cannot find device "nvmf_tgt_br" 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.588 Cannot find device "nvmf_tgt_br2" 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:12.588 Cannot find device "nvmf_init_br" 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:12.588 Cannot find device "nvmf_init_br2" 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:16:12.588 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:12.847 Cannot find device "nvmf_tgt_br" 00:16:12.847 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:16:12.847 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:12.847 Cannot find device "nvmf_tgt_br2" 00:16:12.847 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:16:12.847 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:12.847 Cannot find device "nvmf_br" 00:16:12.847 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:16:12.847 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:12.848 Cannot find device "nvmf_init_if" 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:12.848 Cannot find device "nvmf_init_if2" 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:12.848 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:13.107 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.107 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:13.107 00:16:13.107 --- 10.0.0.3 ping statistics --- 00:16:13.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.107 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:13.107 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:13.107 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:16:13.107 00:16:13.107 --- 10.0.0.4 ping statistics --- 00:16:13.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.107 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:13.107 00:16:13.107 --- 10.0.0.1 ping statistics --- 00:16:13.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.107 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:13.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:13.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:16:13.107 00:16:13.107 --- 10.0.0.2 ping statistics --- 00:16:13.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.107 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # return 0 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=76120 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 76120 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 76120 ']' 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:13.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:13.107 09:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.107 [2024-10-08 09:22:04.711085] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
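(Summary of the setup traced above: nvmf_veth_init has built a network namespace nvmf_tgt_ns_spdk holding the target-side ends of two veth pairs (10.0.0.3/24 and 10.0.0.4/24), left two initiator-side interfaces in the root namespace (10.0.0.1/24 and 10.0.0.2/24), bridged all peer ends over nvmf_br, and opened TCP port 4420 in iptables. A condensed sketch of the same steps, using the interface names and addresses from the trace; the teardown of stale devices, the SPDK_NVMF iptables comments, and error handling are omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up, in both namespaces
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do
        ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up
    done
    # tie the peer ends together over a bridge and allow NVMe/TCP traffic
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings above confirm connectivity in both directions across the bridge, and the nvmf_tgt application (nvmfpid 76120 here) is then launched inside the namespace via 'ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2', so its listeners on 10.0.0.3 are only reachable over these veth pairs; its startup banner and DPDK EAL parameters are traced below.)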
00:16:13.107 [2024-10-08 09:22:04.711835] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.366 [2024-10-08 09:22:04.852697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.366 [2024-10-08 09:22:04.967856] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.366 [2024-10-08 09:22:04.967914] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.366 [2024-10-08 09:22:04.967941] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.366 [2024-10-08 09:22:04.967951] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.366 [2024-10-08 09:22:04.967961] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.366 [2024-10-08 09:22:04.968442] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.366 [2024-10-08 09:22:05.026165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.302 [2024-10-08 09:22:05.788987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.302 [2024-10-08 09:22:05.797093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.302 09:22:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.302 null0 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.302 null1 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76154 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76154 /tmp/host.sock 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 76154 ']' 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.302 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:14.302 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.303 09:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.303 [2024-10-08 09:22:05.887298] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:16:14.303 [2024-10-08 09:22:05.887414] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76154 ] 00:16:14.562 [2024-10-08 09:22:06.026388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.562 [2024-10-08 09:22:06.147111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.562 [2024-10-08 09:22:06.203621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.498 09:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@91 -- # get_subsystem_names 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.758 [2024-10-08 09:22:07.245612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:15.758 09:22:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.758 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.017 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:16:16.017 09:22:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:16:16.275 [2024-10-08 09:22:07.881978] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:16.275 [2024-10-08 09:22:07.882036] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:16.275 [2024-10-08 09:22:07.882055] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:16.275 [2024-10-08 09:22:07.888019] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:16.275 [2024-10-08 09:22:07.944861] 
bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:16.275 [2024-10-08 09:22:07.944889] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:16.843 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:16.843 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:16.843 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:16.844 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.844 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.844 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.844 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:16.844 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:16.844 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.844 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.104 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.105 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.364 [2024-10-08 09:22:08.835172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:17.364 [2024-10-08 09:22:08.836310] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:17.364 [2024-10-08 09:22:08.836341] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.364 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:17.365 [2024-10-08 09:22:08.842324] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.365 
09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.365 [2024-10-08 09:22:08.905831] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:17.365 [2024-10-08 09:22:08.905858] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:17.365 [2024-10-08 09:22:08.905865] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 
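(The condition traced at host/discovery.sh@122 waits until get_subsystem_paths nvme0 reports both listener ports, i.e. "4420 4421", confirming that the discovery service propagated the second listener added on port 4421 as an additional path to the same subsystem. As a sketch of what that helper boils down to - again assuming rpc_cmd forwards to scripts/rpc.py with the -s /tmp/host.sock socket shown in this trace - the underlying query is the pipeline visible in the polling below:

    # list the paths of controller nvme0 and reduce them to a sorted,
    # space-separated list of service IDs (ports)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

The waitforcondition wrapper simply re-evaluates this up to 10 times, one second apart, as traced next.)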
00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:17.365 09:22:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.365 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.625 [2024-10-08 09:22:09.059955] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:17.625 [2024-10-08 09:22:09.059990] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:17.625 [2024-10-08 09:22:09.062841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.625 [2024-10-08 09:22:09.062882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.625 [2024-10-08 09:22:09.062896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.625 [2024-10-08 09:22:09.062906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.625 [2024-10-08 09:22:09.062915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.625 [2024-10-08 09:22:09.062924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.625 [2024-10-08 09:22:09.062934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.625 [2024-10-08 09:22:09.062943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.625 [2024-10-08 09:22:09.062952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1089950 is same with the state(6) to be set 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.625 [2024-10-08 09:22:09.066021] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:17.625 [2024-10-08 09:22:09.066046] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:17.625 [2024-10-08 09:22:09.066136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1089950 (9): Bad file descriptor 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.625 09:22:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:17.625 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:17.626 09:22:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.626 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.908 
09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.908 09:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.862 [2024-10-08 09:22:10.472777] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:18.862 [2024-10-08 09:22:10.472804] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:18.862 [2024-10-08 09:22:10.472822] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:18.862 [2024-10-08 09:22:10.478816] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:18.862 [2024-10-08 09:22:10.540252] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:18.862 [2024-10-08 09:22:10.540331] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:18.862 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.862 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:18.862 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:18.862 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:18.862 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.121 request: 00:16:19.121 { 00:16:19.121 "name": "nvme", 00:16:19.121 "trtype": "tcp", 00:16:19.121 "traddr": "10.0.0.3", 00:16:19.121 "adrfam": "ipv4", 00:16:19.121 "trsvcid": "8009", 00:16:19.121 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:19.121 "wait_for_attach": true, 00:16:19.121 "method": "bdev_nvme_start_discovery", 00:16:19.121 "req_id": 1 00:16:19.121 } 00:16:19.121 Got JSON-RPC error response 00:16:19.121 response: 00:16:19.121 { 00:16:19.121 "code": -17, 00:16:19.121 "message": "File exists" 00:16:19.121 } 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:19.121 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.122 request: 00:16:19.122 { 00:16:19.122 "name": "nvme_second", 00:16:19.122 "trtype": "tcp", 00:16:19.122 "traddr": "10.0.0.3", 00:16:19.122 "adrfam": "ipv4", 00:16:19.122 "trsvcid": "8009", 00:16:19.122 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:19.122 "wait_for_attach": true, 00:16:19.122 "method": "bdev_nvme_start_discovery", 00:16:19.122 "req_id": 1 00:16:19.122 } 00:16:19.122 Got JSON-RPC error response 00:16:19.122 response: 00:16:19.122 { 00:16:19.122 "code": -17, 00:16:19.122 "message": "File exists" 00:16:19.122 } 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:19.122 09:22:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.122 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.381 09:22:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.318 [2024-10-08 09:22:11.820721] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:20.318 [2024-10-08 09:22:11.820997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f88f0 with addr=10.0.0.3, port=8010 00:16:20.318 [2024-10-08 09:22:11.821033] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:20.318 [2024-10-08 09:22:11.821044] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:20.318 [2024-10-08 09:22:11.821055] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:21.255 [2024-10-08 09:22:12.820787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:21.255 [2024-10-08 09:22:12.820918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f88f0 with addr=10.0.0.3, port=8010 00:16:21.255 [2024-10-08 09:22:12.820954] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:21.255 [2024-10-08 09:22:12.820965] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:16:21.255 [2024-10-08 09:22:12.820990] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:22.191 [2024-10-08 09:22:13.820580] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:22.191 request: 00:16:22.191 { 00:16:22.191 "name": "nvme_second", 00:16:22.191 "trtype": "tcp", 00:16:22.191 "traddr": "10.0.0.3", 00:16:22.191 "adrfam": "ipv4", 00:16:22.191 "trsvcid": "8010", 00:16:22.191 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:22.191 "wait_for_attach": false, 00:16:22.191 "attach_timeout_ms": 3000, 00:16:22.191 "method": "bdev_nvme_start_discovery", 00:16:22.191 "req_id": 1 00:16:22.191 } 00:16:22.191 Got JSON-RPC error response 00:16:22.191 response: 00:16:22.191 { 00:16:22.191 "code": -110, 00:16:22.191 "message": "Connection timed out" 00:16:22.191 } 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:22.191 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76154 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.450 09:22:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.450 rmmod nvme_tcp 00:16:22.450 rmmod nvme_fabrics 00:16:22.450 rmmod nvme_keyring 00:16:22.450 09:22:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 76120 ']' 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 76120 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 76120 ']' 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 76120 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76120 00:16:22.450 killing process with pid 76120 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76120' 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 76120 00:16:22.450 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 76120 00:16:22.709 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:22.709 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:22.709 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:22.709 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:22.709 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:22.709 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:16:22.709 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:16:22.709 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:22.709 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:22.710 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:22.710 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:22.710 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:22.710 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.710 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:22.710 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:22.710 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
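Note: the discovery assertions traced above can be replayed by hand against the same target. The snippet below is a minimal sketch, assuming rpc_cmd resolves to spdk/scripts/rpc.py and that the target is still listening on 10.0.0.3 with the ports used in this run; it is a condensed illustration, not a verbatim extract of host/discovery.sh.

    # Start discovery against the discovery service on 10.0.0.3:8009 and wait for attach (-w).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # A second discovery request to the same 10.0.0.3:8009 endpoint is rejected with
    # JSON-RPC error -17 "File exists", which is what the NOT wrapper asserts above.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Count async notifications received since notify id 2, as get_notification_count does.
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'
    # Tear the discovery service down again once the checks pass.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme

The port-8010 attempt with -T 3000 in the trace exercises the timeout path instead and returns -110 "Connection timed out", since nothing listens on that port.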
00:16:22.710 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:22.969 00:16:22.969 real 0m10.597s 00:16:22.969 user 0m19.715s 00:16:22.969 sys 0m2.149s 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.969 ************************************ 00:16:22.969 END TEST nvmf_host_discovery 00:16:22.969 ************************************ 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:22.969 09:22:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.970 09:22:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.970 ************************************ 00:16:22.970 START TEST nvmf_host_multipath_status 00:16:22.970 ************************************ 00:16:22.970 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:23.229 * Looking for test storage... 
00:16:23.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.229 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:23.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.230 --rc genhtml_branch_coverage=1 00:16:23.230 --rc genhtml_function_coverage=1 00:16:23.230 --rc genhtml_legend=1 00:16:23.230 --rc geninfo_all_blocks=1 00:16:23.230 --rc geninfo_unexecuted_blocks=1 00:16:23.230 00:16:23.230 ' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:23.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.230 --rc genhtml_branch_coverage=1 00:16:23.230 --rc genhtml_function_coverage=1 00:16:23.230 --rc genhtml_legend=1 00:16:23.230 --rc geninfo_all_blocks=1 00:16:23.230 --rc geninfo_unexecuted_blocks=1 00:16:23.230 00:16:23.230 ' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:23.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.230 --rc genhtml_branch_coverage=1 00:16:23.230 --rc genhtml_function_coverage=1 00:16:23.230 --rc genhtml_legend=1 00:16:23.230 --rc geninfo_all_blocks=1 00:16:23.230 --rc geninfo_unexecuted_blocks=1 00:16:23.230 00:16:23.230 ' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:23.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.230 --rc genhtml_branch_coverage=1 00:16:23.230 --rc genhtml_function_coverage=1 00:16:23.230 --rc genhtml_legend=1 00:16:23.230 --rc geninfo_all_blocks=1 00:16:23.230 --rc geninfo_unexecuted_blocks=1 00:16:23.230 00:16:23.230 ' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.230 09:22:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.230 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:23.230 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:23.231 Cannot find device "nvmf_init_br" 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:23.231 Cannot find device "nvmf_init_br2" 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:23.231 Cannot find device "nvmf_tgt_br" 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.231 Cannot find device "nvmf_tgt_br2" 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:23.231 Cannot find device "nvmf_init_br" 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:23.231 Cannot find device "nvmf_init_br2" 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:23.231 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:23.490 Cannot find device "nvmf_tgt_br" 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:23.490 Cannot find device "nvmf_tgt_br2" 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:23.490 Cannot find device "nvmf_br" 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:23.490 Cannot find device "nvmf_init_if" 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:23.490 Cannot find device "nvmf_init_if2" 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:23.490 09:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.490 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.491 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.491 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:23.491 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:23.491 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:23.491 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.491 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:23.750 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.750 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:16:23.750 00:16:23.750 --- 10.0.0.3 ping statistics --- 00:16:23.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.750 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:23.750 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:23.750 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:16:23.750 00:16:23.750 --- 10.0.0.4 ping statistics --- 00:16:23.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.750 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:23.750 00:16:23.750 --- 10.0.0.1 ping statistics --- 00:16:23.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.750 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:23.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:16:23.750 00:16:23.750 --- 10.0.0.2 ping statistics --- 00:16:23.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.750 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # return 0 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=76650 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 76650 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76650 ']' 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.750 09:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:23.750 [2024-10-08 09:22:15.287174] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:16:23.750 [2024-10-08 09:22:15.287610] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.750 [2024-10-08 09:22:15.426702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:24.014 [2024-10-08 09:22:15.528111] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.014 [2024-10-08 09:22:15.528334] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.014 [2024-10-08 09:22:15.528508] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.014 [2024-10-08 09:22:15.528559] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.014 [2024-10-08 09:22:15.528649] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.014 [2024-10-08 09:22:15.529212] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.014 [2024-10-08 09:22:15.529222] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.014 [2024-10-08 09:22:15.583320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:24.951 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.951 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:24.951 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:24.951 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:24.951 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:24.951 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.951 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76650 00:16:24.951 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:24.951 [2024-10-08 09:22:16.607070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.951 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:25.518 Malloc0 00:16:25.518 09:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:25.777 09:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.035 09:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:26.294 [2024-10-08 09:22:17.782646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:26.294 09:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:26.555 [2024-10-08 09:22:18.046844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76711 00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76711 /var/tmp/bdevperf.sock 00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76711 ']' 00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:26.555 09:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:27.491 09:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.491 09:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:27.491 09:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:27.749 09:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:28.316 Nvme0n1 00:16:28.316 09:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:28.575 Nvme0n1 00:16:28.575 09:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:28.575 09:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:30.489 09:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:30.489 09:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:31.081 09:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:31.081 09:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:32.458 09:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:32.458 09:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:32.458 09:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.459 09:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:32.459 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.459 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:32.459 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.459 09:22:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:32.718 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.718 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:32.718 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.718 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:32.977 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.977 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:32.977 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.977 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:33.235 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.235 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:33.235 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.235 09:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:33.495 09:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.495 09:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:33.495 09:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.495 09:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:33.753 09:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.753 09:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:33.753 09:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:34.012 09:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:34.271 09:22:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:35.209 09:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:35.209 09:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:35.209 09:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.209 09:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:35.776 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:35.776 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:35.776 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.776 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:36.034 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.034 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:36.034 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:36.034 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.034 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.034 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:36.034 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.034 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:36.293 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.293 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:36.293 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.293 09:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:36.552 09:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.552 09:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:36.552 09:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.552 09:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:36.813 09:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.813 09:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:36.813 09:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:37.076 09:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:37.645 09:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:38.582 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:38.582 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:38.582 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.582 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:38.840 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.840 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:38.840 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.840 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:39.098 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:39.098 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:39.098 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.098 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:39.357 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.357 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:39.357 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.357 09:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:39.616 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.616 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:39.616 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:39.616 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.874 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.874 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:39.874 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.874 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:40.133 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.133 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:40.133 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:40.392 09:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:40.651 09:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:41.588 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:41.588 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:41.588 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.588 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:42.157 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.157 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:42.157 09:22:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.157 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:42.157 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:42.157 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:42.157 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:42.157 09:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.726 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.726 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:42.726 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.726 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:42.985 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.985 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:42.985 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.985 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:43.244 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.244 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:43.244 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.244 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:43.244 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:43.244 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:43.244 09:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:43.813 09:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:43.813 09:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:45.192 09:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:45.192 09:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:45.192 09:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:45.192 09:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.192 09:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.192 09:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:45.192 09:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.192 09:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:45.451 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.451 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:45.451 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.451 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:45.710 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.710 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:45.710 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.710 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:45.969 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.969 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:45.969 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.969 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:46.228 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.228 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:46.228 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.228 09:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:46.487 09:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.487 09:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:46.487 09:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:46.750 09:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:47.009 09:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:47.956 09:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:47.956 09:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:47.956 09:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.956 09:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:48.215 09:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.215 09:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:48.215 09:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.215 09:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:48.473 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.473 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:48.473 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.473 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:16:49.040 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.040 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:49.040 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.040 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:49.299 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.299 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:49.299 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.299 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:49.557 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:49.557 09:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:49.557 09:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.557 09:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:49.817 09:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.817 09:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:50.076 09:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:50.076 09:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:50.334 09:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:50.593 09:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:51.530 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:51.530 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:51.530 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:16:51.530 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:51.790 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.790 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:51.790 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.790 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:52.049 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.049 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:52.049 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:52.049 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.335 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.335 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:52.335 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.335 09:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:52.605 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.605 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:52.605 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:52.605 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.862 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.862 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:52.862 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:52.862 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.121 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.121 
09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:53.121 09:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:53.379 09:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:53.638 09:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:55.017 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:55.017 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:55.017 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.017 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:55.017 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:55.017 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:55.017 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.017 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:55.275 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.275 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:55.275 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.275 09:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:55.534 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.534 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:55.534 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.534 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:55.793 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.793 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:55.793 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.793 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:56.052 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.052 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:56.052 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:56.052 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.311 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.311 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:56.311 09:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:56.570 09:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:56.833 09:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:57.804 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:57.804 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:57.804 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.804 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:58.062 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.062 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:58.062 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.062 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:58.321 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.321 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:58.321 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.321 09:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:58.580 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.580 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:58.580 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.580 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:58.839 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.839 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:58.839 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.839 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:59.098 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.098 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:59.098 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.098 09:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:59.357 09:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.357 09:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:59.357 09:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:59.925 09:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:00.184 09:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:01.121 09:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:01.121 09:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:01.121 09:22:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.121 09:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:01.380 09:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.380 09:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:01.380 09:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.380 09:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:01.639 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:01.639 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:01.639 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:01.639 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.897 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.897 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:01.898 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:01.898 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.157 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.157 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:02.157 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:02.157 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.415 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.415 09:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:02.415 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.415 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76711 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76711 ']' 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76711 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76711 00:17:02.674 killing process with pid 76711 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76711' 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76711 00:17:02.674 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76711 00:17:02.674 { 00:17:02.674 "results": [ 00:17:02.674 { 00:17:02.674 "job": "Nvme0n1", 00:17:02.674 "core_mask": "0x4", 00:17:02.674 "workload": "verify", 00:17:02.674 "status": "terminated", 00:17:02.674 "verify_range": { 00:17:02.674 "start": 0, 00:17:02.674 "length": 16384 00:17:02.674 }, 00:17:02.674 "queue_depth": 128, 00:17:02.674 "io_size": 4096, 00:17:02.674 "runtime": 34.025825, 00:17:02.674 "iops": 7384.920130518511, 00:17:02.674 "mibps": 28.847344259837932, 00:17:02.674 "io_failed": 0, 00:17:02.674 "io_timeout": 0, 00:17:02.674 "avg_latency_us": 17301.76201442951, 00:17:02.674 "min_latency_us": 796.8581818181818, 00:17:02.674 "max_latency_us": 4026531.84 00:17:02.674 } 00:17:02.674 ], 00:17:02.674 "core_count": 1 00:17:02.674 } 00:17:02.938 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76711 00:17:02.938 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:02.938 [2024-10-08 09:22:18.110797] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:17:02.938 [2024-10-08 09:22:18.110901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76711 ] 00:17:02.938 [2024-10-08 09:22:18.249897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.938 [2024-10-08 09:22:18.367656] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.938 [2024-10-08 09:22:18.431599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:02.938 Running I/O for 90 seconds... 
00:17:02.938 6677.00 IOPS, 26.08 MiB/s [2024-10-08T09:22:54.621Z] 6858.50 IOPS, 26.79 MiB/s [2024-10-08T09:22:54.621Z] 7004.00 IOPS, 27.36 MiB/s [2024-10-08T09:22:54.621Z] 6808.75 IOPS, 26.60 MiB/s [2024-10-08T09:22:54.621Z] 6699.40 IOPS, 26.17 MiB/s [2024-10-08T09:22:54.621Z] 6860.00 IOPS, 26.80 MiB/s [2024-10-08T09:22:54.621Z] 7114.00 IOPS, 27.79 MiB/s [2024-10-08T09:22:54.621Z] 7130.75 IOPS, 27.85 MiB/s [2024-10-08T09:22:54.621Z] 7189.44 IOPS, 28.08 MiB/s [2024-10-08T09:22:54.621Z] 7337.70 IOPS, 28.66 MiB/s [2024-10-08T09:22:54.621Z] 7455.36 IOPS, 29.12 MiB/s [2024-10-08T09:22:54.621Z] 7560.08 IOPS, 29.53 MiB/s [2024-10-08T09:22:54.621Z] 7648.00 IOPS, 29.88 MiB/s [2024-10-08T09:22:54.621Z] 7710.79 IOPS, 30.12 MiB/s [2024-10-08T09:22:54.621Z] [2024-10-08 09:22:35.165816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.165893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.165970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.165991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.166028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.166065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.166103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.166138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.166172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.166206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166687] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.166960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.166985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.167002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.167038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.167074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14160 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.167109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.167168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.167218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.167252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.938 [2024-10-08 09:22:35.167288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.167322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.167364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.167401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.938 [2024-10-08 09:22:35.167436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:02.938 [2024-10-08 09:22:35.167455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167856] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.167887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.167927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.167962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.167983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.167998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:17:02.939 [2024-10-08 09:22:35.168229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.168475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.168973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.168993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.169174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.169212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.169258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.169311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.169346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.169380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
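The per-second progress samples interleaved with the qpair dump (e.g. "6677.00 IOPS, 26.08 MiB/s") and the final "mibps" figure in the terminated-job summary above follow from the IOPS figures and the 4096-byte io_size alone. A quick check of that arithmetic (post-processing only, not something the test itself runs):

awk 'BEGIN { iops = 6677.00; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'              # -> 26.08 MiB/s
awk 'BEGIN { iops = 7384.920130518511; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'    # -> 28.85 MiB/s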
00:17:02.939 [2024-10-08 09:22:35.169415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.939 [2024-10-08 09:22:35.169460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:02.939 [2024-10-08 09:22:35.169727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.939 [2024-10-08 09:22:35.169742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.169761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:35.169789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.169810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:35.169826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.169861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:35.169876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.169897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:35.169912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.169932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:35.169947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.169967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:35.169983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.170004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:35.170019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.170869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:35.170897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.170930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.170947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.170974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.170990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
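The completion entries dumped in this stretch all carry ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 3h (path-related) with status code 02h, which is what the host-side driver reports for I/O that lands on a path whose ANA state is inaccessible. A hypothetical one-liner for counting how many such completions the dumped try.txt contains (illustrative post-processing of the file cat'ed above, not part of the test):

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt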
00:17:02.940 [2024-10-08 09:22:35.171475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:35.171968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:35.171983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:02.940 7758.87 IOPS, 30.31 MiB/s [2024-10-08T09:22:54.623Z] 7273.94 IOPS, 28.41 MiB/s [2024-10-08T09:22:54.623Z] 6846.06 IOPS, 26.74 MiB/s [2024-10-08T09:22:54.623Z] 6465.72 IOPS, 25.26 MiB/s [2024-10-08T09:22:54.623Z] 6172.95 IOPS, 24.11 MiB/s [2024-10-08T09:22:54.623Z] 6269.90 IOPS, 24.49 MiB/s [2024-10-08T09:22:54.623Z] 6358.00 IOPS, 24.84 MiB/s [2024-10-08T09:22:54.623Z] 6481.45 IOPS, 25.32 MiB/s [2024-10-08T09:22:54.623Z] 6624.09 IOPS, 25.88 MiB/s [2024-10-08T09:22:54.623Z] 6733.79 IOPS, 26.30 MiB/s [2024-10-08T09:22:54.623Z] 6829.32 IOPS, 26.68 MiB/s [2024-10-08T09:22:54.623Z] 6902.35 IOPS, 26.96 MiB/s [2024-10-08T09:22:54.623Z] 6949.22 IOPS, 27.15 MiB/s [2024-10-08T09:22:54.623Z] 7004.46 IOPS, 27.36 MiB/s [2024-10-08T09:22:54.623Z] 7086.72 IOPS, 27.68 MiB/s [2024-10-08T09:22:54.623Z] 7163.53 IOPS, 27.98 MiB/s [2024-10-08T09:22:54.623Z] 7237.23 IOPS, 28.27 MiB/s [2024-10-08T09:22:54.623Z] [2024-10-08 09:22:51.595730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.595819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.595894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.595945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.595970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.595985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.596019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.596052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.596086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.596119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.596153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.596187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.596220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.596255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.596288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.596321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.596363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.596400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 
m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.596419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.596433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.597246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.597273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.597299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.597315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.597335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.597350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.597369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.940 [2024-10-08 09:22:51.597383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.597402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.597416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.597436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.597450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.940 [2024-10-08 09:22:51.597470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.940 [2024-10-08 09:22:51.597484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.597517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.597551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.597585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.597650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.597685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.597720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.597754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.597840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.597877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.597918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.597955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.597991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 
09:22:51.598006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.598041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.598078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.598113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.598174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.598220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.598253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.598346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.598383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.598420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115112 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.598458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.598496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.598533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.598555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.598571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.599486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.599528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.599576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.599613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.599648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.599682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.599715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.599765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.599802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.599837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.599870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.599903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.599938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.599972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.599992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.600016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.600037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.600052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:17:02.941 [2024-10-08 09:22:51.600071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.600085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.600105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.600119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.600138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.600152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.600171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.941 [2024-10-08 09:22:51.600186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.600205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.600236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.600257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.941 [2024-10-08 09:22:51.600271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:02.941 [2024-10-08 09:22:51.600291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.942 [2024-10-08 09:22:51.600306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:02.942 [2024-10-08 09:22:51.600326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.942 [2024-10-08 09:22:51.600341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:02.942 7291.44 IOPS, 28.48 MiB/s [2024-10-08T09:22:54.625Z] 7339.82 IOPS, 28.67 MiB/s [2024-10-08T09:22:54.625Z] 7384.41 IOPS, 28.85 MiB/s [2024-10-08T09:22:54.625Z] Received shutdown signal, test time was about 34.026550 seconds 00:17:02.942 00:17:02.942 Latency(us) 00:17:02.942 [2024-10-08T09:22:54.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.942 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:02.942 Verification LBA range: start 0x0 length 0x4000 00:17:02.942 Nvme0n1 : 34.03 7384.92 28.85 0.00 0.00 17301.76 796.86 4026531.84 00:17:02.942 [2024-10-08T09:22:54.625Z] 
=================================================================================================================== 00:17:02.942 [2024-10-08T09:22:54.625Z] Total : 7384.92 28.85 0.00 0.00 17301.76 796.86 4026531.84 00:17:02.942 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:03.201 rmmod nvme_tcp 00:17:03.201 rmmod nvme_fabrics 00:17:03.201 rmmod nvme_keyring 00:17:03.201 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 76650 ']' 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 76650 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76650 ']' 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76650 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76650 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:03.460 killing process with pid 76650 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76650' 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76650 00:17:03.460 09:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76650 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:03.719 
09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:03.719 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:17:03.979 00:17:03.979 real 0m40.924s 00:17:03.979 user 2m11.148s 00:17:03.979 sys 0m12.063s 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 
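For reference, the teardown traced above condenses to the following minimal sketch. The subsystem NQN, target PID 76650, module names, and interface/namespace names are taken directly from the trace; the ordering is an illustrative condensation, not the exact nvmf/common.sh implementation, and the final "ip netns delete" is an assumed equivalent of the _remove_spdk_ns helper.

# delete the test subsystem over the target's RPC socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# nvmftestfini: flush I/O, unload the kernel initiator modules, stop the target
sync
modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill 76650 && wait 76650       # nvmf_tgt started for this test run
# strip only the SPDK-tagged firewall rules (they carry an SPDK_NVMF comment)
iptables-save | grep -v SPDK_NVMF | iptables-restore
# nvmf_veth_fini: dismantle the bridge, the veth pairs, and the target namespace
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns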
00:17:03.979 ************************************ 00:17:03.979 END TEST nvmf_host_multipath_status 00:17:03.979 ************************************ 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.979 ************************************ 00:17:03.979 START TEST nvmf_discovery_remove_ifc 00:17:03.979 ************************************ 00:17:03.979 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:04.239 * Looking for test storage... 00:17:04.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:04.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.239 --rc genhtml_branch_coverage=1 00:17:04.239 --rc genhtml_function_coverage=1 00:17:04.239 --rc genhtml_legend=1 00:17:04.239 --rc geninfo_all_blocks=1 00:17:04.239 --rc geninfo_unexecuted_blocks=1 00:17:04.239 00:17:04.239 ' 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:04.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.239 --rc genhtml_branch_coverage=1 00:17:04.239 --rc genhtml_function_coverage=1 00:17:04.239 --rc genhtml_legend=1 00:17:04.239 --rc geninfo_all_blocks=1 00:17:04.239 --rc geninfo_unexecuted_blocks=1 00:17:04.239 00:17:04.239 ' 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:04.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.239 --rc genhtml_branch_coverage=1 00:17:04.239 --rc genhtml_function_coverage=1 00:17:04.239 --rc genhtml_legend=1 00:17:04.239 --rc geninfo_all_blocks=1 00:17:04.239 --rc geninfo_unexecuted_blocks=1 00:17:04.239 00:17:04.239 ' 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:04.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.239 --rc genhtml_branch_coverage=1 00:17:04.239 --rc genhtml_function_coverage=1 00:17:04.239 --rc genhtml_legend=1 00:17:04.239 --rc geninfo_all_blocks=1 00:17:04.239 --rc geninfo_unexecuted_blocks=1 00:17:04.239 00:17:04.239 ' 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.239 09:22:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.239 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:04.240 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.240 09:22:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:04.240 Cannot find device "nvmf_init_br" 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:04.240 Cannot find device "nvmf_init_br2" 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:04.240 Cannot find device "nvmf_tgt_br" 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.240 Cannot find device "nvmf_tgt_br2" 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:04.240 Cannot find device "nvmf_init_br" 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:04.240 Cannot find device "nvmf_init_br2" 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:04.240 Cannot find device "nvmf_tgt_br" 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:04.240 Cannot find device "nvmf_tgt_br2" 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:17:04.240 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:04.499 Cannot find device "nvmf_br" 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:04.499 Cannot find device "nvmf_init_if" 00:17:04.499 09:22:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:04.499 Cannot find device "nvmf_init_if2" 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.499 09:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:04.499 09:22:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:04.499 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:04.500 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:04.500 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:04.500 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:04.500 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:04.500 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:04.500 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:17:04.500 00:17:04.500 --- 10.0.0.3 ping statistics --- 00:17:04.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.500 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:04.500 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:04.500 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:04.500 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:17:04.500 00:17:04.500 --- 10.0.0.4 ping statistics --- 00:17:04.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.500 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:04.500 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:04.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
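The ipts helper seen above is a thin wrapper that tags every rule it adds with an 'SPDK_NVMF:' comment; that tag is what the teardown later keys on to strip only the rules this test created. The expanded commands, as recorded:

    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The four pings that follow simply confirm both directions of both paths (initiator 10.0.0.1/.2 to target 10.0.0.3/.4 and back) before any NVMe traffic is attempted.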
00:17:04.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:04.500 00:17:04.500 --- 10.0.0.1 ping statistics --- 00:17:04.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.500 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:04.500 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:04.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:04.759 00:17:04.759 --- 10.0.0.2 ping statistics --- 00:17:04.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.759 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # return 0 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=77552 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 77552 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77552 ']' 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
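nvmfappstart launches the target inside the namespace and then blocks until its RPC socket answers. A simplified stand-in for that sequence, assuming the repo layout of this run (the real waitforlisten also verifies that the pid stays alive while it polls):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the default RPC socket until the app is up (simplified waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done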
00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.759 09:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.759 [2024-10-08 09:22:56.277654] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:17:04.759 [2024-10-08 09:22:56.277787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.759 [2024-10-08 09:22:56.416579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.017 [2024-10-08 09:22:56.537044] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.017 [2024-10-08 09:22:56.537138] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.017 [2024-10-08 09:22:56.537162] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.017 [2024-10-08 09:22:56.537173] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.017 [2024-10-08 09:22:56.537183] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.017 [2024-10-08 09:22:56.537655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.017 [2024-10-08 09:22:56.594439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.952 [2024-10-08 09:22:57.370764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.952 [2024-10-08 09:22:57.378888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:05.952 null0 00:17:05.952 [2024-10-08 09:22:57.410799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77590 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77590 /tmp/host.sock 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77590 ']' 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.952 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.952 09:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.952 [2024-10-08 09:22:57.493560] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:17:05.952 [2024-10-08 09:22:57.493665] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77590 ] 00:17:06.212 [2024-10-08 09:22:57.635684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.212 [2024-10-08 09:22:57.765203] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.151 [2024-10-08 09:22:58.575588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.151 09:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.087 [2024-10-08 09:22:59.644218] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:08.087 [2024-10-08 09:22:59.644269] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:08.087 [2024-10-08 09:22:59.644289] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:08.087 [2024-10-08 09:22:59.650269] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:08.087 [2024-10-08 09:22:59.708457] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:08.087 [2024-10-08 09:22:59.708521] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:08.087 [2024-10-08 09:22:59.708552] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:08.087 [2024-10-08 09:22:59.708569] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:08.087 [2024-10-08 09:22:59.708597] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:08.087 [2024-10-08 09:22:59.712616] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1264400 was disconnected and freed. delete nvme_qpair. 
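The discovery attach above is driven through the host app's private RPC socket; rpc_cmd forwards its arguments to scripts/rpc.py. The equivalent direct invocation, with the same flags the trace records (the short loss/reconnect timeouts are what make the interface-removal step below converge quickly):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach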
00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:08.087 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:08.347 09:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:09.283 09:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:09.283 09:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.283 09:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.283 09:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:09.283 09:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.283 09:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:09.283 09:23:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:09.283 09:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.283 09:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:09.283 09:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:10.660 09:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:11.597 09:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:11.597 09:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.597 09:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.597 09:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:11.597 09:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:11.597 09:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:11.597 09:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:11.597 09:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.597 09:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:11.597 09:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:12.533 09:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:12.533 09:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.533 09:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:12.533 09:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.533 09:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:12.533 09:23:04 
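The repeating rpc_cmd / jq / sort / xargs block is the test's bdev poll. Condensed, the two helpers from discovery_remove_ifc.sh behave roughly like this (a sketch; the real script compares with != and sleeps between attempts, exactly as the trace shows):

    get_bdev_list() {
        # all bdev names on the host app, as one sorted line, e.g. "nvme0n1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once per second until the list matches the expected value
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # present while the path is healthy
    wait_for_bdev ''        # empty once the removed interface takes the controller down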
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:12.533 09:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:12.533 09:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.533 09:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:12.533 09:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:13.470 09:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:13.470 09:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.470 09:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.470 09:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:13.470 09:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:13.470 09:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:13.470 09:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:13.470 [2024-10-08 09:23:05.136760] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:13.470 [2024-10-08 09:23:05.137100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.470 [2024-10-08 09:23:05.137123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.470 [2024-10-08 09:23:05.137137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.470 [2024-10-08 09:23:05.137152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.470 [2024-10-08 09:23:05.137162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.470 [2024-10-08 09:23:05.137179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.470 [2024-10-08 09:23:05.137189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.470 [2024-10-08 09:23:05.137197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.470 [2024-10-08 09:23:05.137207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.470 [2024-10-08 09:23:05.137216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.470 [2024-10-08 09:23:05.137226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1237f70 is same with the state(6) to be set 00:17:13.470 09:23:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.470 [2024-10-08 09:23:05.146743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1237f70 (9): Bad file descriptor 00:17:13.728 [2024-10-08 09:23:05.156779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:13.728 09:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:13.729 09:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.666 [2024-10-08 09:23:06.197809] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:14.666 [2024-10-08 09:23:06.197874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1237f70 with addr=10.0.0.3, port=4420 00:17:14.666 [2024-10-08 09:23:06.197893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1237f70 is same with the state(6) to be set 00:17:14.666 [2024-10-08 09:23:06.197921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1237f70 (9): Bad file descriptor 00:17:14.666 [2024-10-08 09:23:06.198240] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.666 [2024-10-08 09:23:06.198297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:14.666 [2024-10-08 09:23:06.198308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:14.666 [2024-10-08 09:23:06.198318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:14.666 [2024-10-08 09:23:06.198336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:14.666 [2024-10-08 09:23:06.198346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:14.666 09:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:15.602 [2024-10-08 09:23:07.198400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:17:15.602 [2024-10-08 09:23:07.198465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:15.602 [2024-10-08 09:23:07.198477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:15.602 [2024-10-08 09:23:07.198491] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:15.602 [2024-10-08 09:23:07.198514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:15.602 [2024-10-08 09:23:07.198547] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:17:15.602 [2024-10-08 09:23:07.198593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.602 [2024-10-08 09:23:07.198621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.602 [2024-10-08 09:23:07.198654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.602 [2024-10-08 09:23:07.198671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.602 [2024-10-08 09:23:07.198682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.602 [2024-10-08 09:23:07.198691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.602 [2024-10-08 09:23:07.198700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.602 [2024-10-08 09:23:07.198719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.602 [2024-10-08 09:23:07.198729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.602 [2024-10-08 09:23:07.198807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.602 [2024-10-08 09:23:07.198818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:15.602 [2024-10-08 09:23:07.198897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ccd70 (9): Bad file descriptor 00:17:15.602 [2024-10-08 09:23:07.199891] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:15.602 [2024-10-08 09:23:07.200098] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:15.602 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:15.602 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:15.602 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.602 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.602 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:15.602 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.602 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:15.602 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:15.861 09:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:16.798 09:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:16.798 09:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.798 09:23:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.798 09:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.798 09:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:16.798 09:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:16.798 09:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:16.798 09:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.798 09:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:16.798 09:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:17.735 [2024-10-08 09:23:09.208652] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:17.735 [2024-10-08 09:23:09.208693] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:17.735 [2024-10-08 09:23:09.208712] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:17.735 [2024-10-08 09:23:09.214709] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:17.735 [2024-10-08 09:23:09.271543] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:17.735 [2024-10-08 09:23:09.271756] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:17.735 [2024-10-08 09:23:09.271846] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:17.735 [2024-10-08 09:23:09.271989] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:17.735 [2024-10-08 09:23:09.272048] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:17.735 [2024-10-08 09:23:09.277052] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1270c30 was disconnected and freed. delete nvme_qpair. 
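Condensing the fault-injection cycle the trace just walked through (the ip commands are the ones at lines 75-76 and 82-83 of discovery_remove_ifc.sh above):

    # 1. drop the listener address and take the target-side veth down
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''            # nvme0n1 vanishes once the 2 s ctrlr-loss timeout expires
    # 2. restore the address and bring the interface back up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1       # discovery re-attaches and surfaces a fresh controller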
00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77590 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77590 ']' 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77590 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77590 00:17:17.994 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:17.994 killing process with pid 77590 00:17:17.995 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:17.995 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77590' 00:17:17.995 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77590 00:17:17.995 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77590 00:17:18.254 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:18.254 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:18.254 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:18.254 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:18.254 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:18.254 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:18.254 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:18.254 rmmod nvme_tcp 00:17:18.254 rmmod nvme_fabrics 00:17:18.254 rmmod nvme_keyring 00:17:18.513 09:23:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 77552 ']' 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 77552 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77552 ']' 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77552 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77552 00:17:18.513 killing process with pid 77552 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77552' 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77552 00:17:18.513 09:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77552 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
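nvmftestfini's network teardown is the mirror image of the setup: strip the tagged iptables rules, then dismantle the bridge, the veth pairs, and (via remove_spdk_ns) the namespace itself. Condensed from the commands above and below:

    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK_NVMF-tagged rules
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of remove_spdk_ns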
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.772 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:19.031 00:17:19.031 real 0m14.871s 00:17:19.031 user 0m25.306s 00:17:19.031 sys 0m2.726s 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.031 ************************************ 00:17:19.031 END TEST nvmf_discovery_remove_ifc 00:17:19.031 ************************************ 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.031 ************************************ 00:17:19.031 START TEST nvmf_identify_kernel_target 00:17:19.031 ************************************ 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:19.031 * Looking for test storage... 
00:17:19.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.031 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:19.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.032 --rc genhtml_branch_coverage=1 00:17:19.032 --rc genhtml_function_coverage=1 00:17:19.032 --rc genhtml_legend=1 00:17:19.032 --rc geninfo_all_blocks=1 00:17:19.032 --rc geninfo_unexecuted_blocks=1 00:17:19.032 00:17:19.032 ' 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:19.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.032 --rc genhtml_branch_coverage=1 00:17:19.032 --rc genhtml_function_coverage=1 00:17:19.032 --rc genhtml_legend=1 00:17:19.032 --rc geninfo_all_blocks=1 00:17:19.032 --rc geninfo_unexecuted_blocks=1 00:17:19.032 00:17:19.032 ' 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:19.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.032 --rc genhtml_branch_coverage=1 00:17:19.032 --rc genhtml_function_coverage=1 00:17:19.032 --rc genhtml_legend=1 00:17:19.032 --rc geninfo_all_blocks=1 00:17:19.032 --rc geninfo_unexecuted_blocks=1 00:17:19.032 00:17:19.032 ' 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:19.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.032 --rc genhtml_branch_coverage=1 00:17:19.032 --rc genhtml_function_coverage=1 00:17:19.032 --rc genhtml_legend=1 00:17:19.032 --rc geninfo_all_blocks=1 00:17:19.032 --rc geninfo_unexecuted_blocks=1 00:17:19.032 00:17:19.032 ' 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
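The lcov probe above feeds a per-component version compare from scripts/common.sh (cmp_versions splitting on '.' and '-'). A simplified sketch of the same idea; the helper and variable names here are illustrative, not the script's own:

    # version_lt A B -> success if A < B, comparing dot/dash-separated numeric components
    version_lt() {
        local IFS=.- i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && use_legacy_lcov_opts=1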
00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.032 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
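nvmf/common.sh mints a fresh host identity for the run (the NVME_HOSTNQN / NVME_HOSTID pair above). A sketch of how those values end up being consumed by nvme-cli; the connect line is illustrative and not part of this trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # the bare uuid portion, as seen in the trace
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # illustrative use with the kernel initiator against the test subsystem
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"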
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.292 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:19.292 09:23:10 
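One genuine warning is captured just above: build_nvmf_app_args reaches [ '' -eq 1 ] because the variable it tests is empty, so [ prints "integer expression expected" (common.sh line 33) and the branch simply falls through; the run continues unaffected. For illustration only, with a made-up variable name, this is the failure and the usual defensive spellings:

    unset SOME_FLAG                                     # hypothetical variable
    [ "$SOME_FLAG" -eq 1 ] && echo on                   # -> "[: : integer expression expected"
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo on              # default the value before comparing
    [[ -n $SOME_FLAG && $SOME_FLAG -eq 1 ]] && echo on  # or require non-empty first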
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.292 09:23:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:19.292 Cannot find device "nvmf_init_br" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:19.292 Cannot find device "nvmf_init_br2" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:19.292 Cannot find device "nvmf_tgt_br" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.292 Cannot find device "nvmf_tgt_br2" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:19.292 Cannot find device "nvmf_init_br" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:19.292 Cannot find device "nvmf_init_br2" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:19.292 Cannot find device "nvmf_tgt_br" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:19.292 Cannot find device "nvmf_tgt_br2" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:19.292 Cannot find device "nvmf_br" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:19.292 Cannot find device "nvmf_init_if" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:19.292 Cannot find device "nvmf_init_if2" 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:19.292 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.293 09:23:10 
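All of the "Cannot find device" / "Cannot open network namespace" messages above are expected on a clean host: nvmf_veth_init first tears down whatever a previous run may have left behind, and every removal in the trace is immediately followed by a true, so the failures are swallowed on purpose. A condensed sketch of that idempotent pre-cleanup (interface and namespace names taken from the trace; only a representative subset of links is shown):

    # Best-effort removal of leftovers from an earlier run; a fresh host
    # has none of these, so every command is allowed to fail.
    ip link set nvmf_init_br nomaster                            || true
    ip link set nvmf_tgt_br down                                 || true
    ip link delete nvmf_br type bridge                           || true
    ip link delete nvmf_init_if                                  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    || true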
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:19.293 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.293 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:19.293 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:19.293 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:19.293 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:19.293 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:19.293 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:19.293 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:19.293 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:19.552 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:19.552 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:19.552 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:19.552 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:19.552 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:19.552 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:19.552 09:23:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:19.552 09:23:11 
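With the stale state cleared, nvmf_veth_init builds the test fabric: a dedicated network namespace for the target, veth pairs whose host-side ends will be bridged, and the 10.0.0.0/24 addresses used for the rest of the run. A condensed sketch of the topology as the traced commands create it (only the first initiator/target pair is shown; the *_if2/*_br2 interfaces repeat the pattern):

    ip netns add nvmf_tgt_ns_spdk                                   # target lives in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end <-> bridge end
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target end    <-> bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # next in the trace: enslave the *_br ends to nvmf_br, open TCP 4420 in iptables, ping across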
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:19.552 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:19.552 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.183 ms 00:17:19.552 00:17:19.552 --- 10.0.0.3 ping statistics --- 00:17:19.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.552 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:19.552 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:19.552 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:17:19.552 00:17:19.552 --- 10.0.0.4 ping statistics --- 00:17:19.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.552 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:19.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:19.552 00:17:19.552 --- 10.0.0.1 ping statistics --- 00:17:19.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.552 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:19.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:19.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:19.552 00:17:19.552 --- 10.0.0.2 ping statistics --- 00:17:19.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.552 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # return 0 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:19.552 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:19.553 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:20.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:20.120 Waiting for block devices as requested 00:17:20.120 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:20.120 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:20.120 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:20.120 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:20.120 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:17:20.120 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:20.120 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:20.120 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:20.120 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:17:20.120 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:20.120 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:20.379 No valid GPT data, bailing 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:20.379 09:23:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:20.379 No valid GPT data, bailing 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:20.379 No valid GPT data, bailing 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:20.379 09:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:20.379 No valid GPT data, bailing 00:17:20.379 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid=a5ef64a0-86d4-4d8b-af10-05a9f556092c -a 10.0.0.1 -t tcp -s 4420 00:17:20.639 00:17:20.639 Discovery Log Number of Records 2, Generation counter 2 00:17:20.639 =====Discovery Log Entry 0====== 00:17:20.639 trtype: tcp 00:17:20.639 adrfam: ipv4 00:17:20.639 subtype: current discovery subsystem 00:17:20.639 treq: not specified, sq flow control disable supported 00:17:20.639 portid: 1 00:17:20.639 trsvcid: 4420 00:17:20.639 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:20.639 traddr: 10.0.0.1 00:17:20.639 eflags: none 00:17:20.639 sectype: none 00:17:20.639 =====Discovery Log Entry 1====== 00:17:20.639 trtype: tcp 00:17:20.639 adrfam: ipv4 00:17:20.639 subtype: nvme subsystem 00:17:20.639 treq: not 
specified, sq flow control disable supported 00:17:20.639 portid: 1 00:17:20.639 trsvcid: 4420 00:17:20.639 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:20.639 traddr: 10.0.0.1 00:17:20.639 eflags: none 00:17:20.639 sectype: none 00:17:20.639 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:20.639 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:20.639 ===================================================== 00:17:20.639 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:20.639 ===================================================== 00:17:20.639 Controller Capabilities/Features 00:17:20.639 ================================ 00:17:20.639 Vendor ID: 0000 00:17:20.639 Subsystem Vendor ID: 0000 00:17:20.639 Serial Number: a66733a142bec81bdc90 00:17:20.639 Model Number: Linux 00:17:20.639 Firmware Version: 6.8.9-20 00:17:20.639 Recommended Arb Burst: 0 00:17:20.639 IEEE OUI Identifier: 00 00 00 00:17:20.639 Multi-path I/O 00:17:20.639 May have multiple subsystem ports: No 00:17:20.639 May have multiple controllers: No 00:17:20.639 Associated with SR-IOV VF: No 00:17:20.639 Max Data Transfer Size: Unlimited 00:17:20.639 Max Number of Namespaces: 0 00:17:20.639 Max Number of I/O Queues: 1024 00:17:20.639 NVMe Specification Version (VS): 1.3 00:17:20.639 NVMe Specification Version (Identify): 1.3 00:17:20.639 Maximum Queue Entries: 1024 00:17:20.639 Contiguous Queues Required: No 00:17:20.639 Arbitration Mechanisms Supported 00:17:20.639 Weighted Round Robin: Not Supported 00:17:20.639 Vendor Specific: Not Supported 00:17:20.639 Reset Timeout: 7500 ms 00:17:20.639 Doorbell Stride: 4 bytes 00:17:20.639 NVM Subsystem Reset: Not Supported 00:17:20.639 Command Sets Supported 00:17:20.639 NVM Command Set: Supported 00:17:20.639 Boot Partition: Not Supported 00:17:20.639 Memory Page Size Minimum: 4096 bytes 00:17:20.639 Memory Page Size Maximum: 4096 bytes 00:17:20.639 Persistent Memory Region: Not Supported 00:17:20.639 Optional Asynchronous Events Supported 00:17:20.639 Namespace Attribute Notices: Not Supported 00:17:20.639 Firmware Activation Notices: Not Supported 00:17:20.639 ANA Change Notices: Not Supported 00:17:20.639 PLE Aggregate Log Change Notices: Not Supported 00:17:20.639 LBA Status Info Alert Notices: Not Supported 00:17:20.639 EGE Aggregate Log Change Notices: Not Supported 00:17:20.639 Normal NVM Subsystem Shutdown event: Not Supported 00:17:20.639 Zone Descriptor Change Notices: Not Supported 00:17:20.639 Discovery Log Change Notices: Supported 00:17:20.639 Controller Attributes 00:17:20.639 128-bit Host Identifier: Not Supported 00:17:20.639 Non-Operational Permissive Mode: Not Supported 00:17:20.639 NVM Sets: Not Supported 00:17:20.639 Read Recovery Levels: Not Supported 00:17:20.639 Endurance Groups: Not Supported 00:17:20.639 Predictable Latency Mode: Not Supported 00:17:20.639 Traffic Based Keep ALive: Not Supported 00:17:20.639 Namespace Granularity: Not Supported 00:17:20.639 SQ Associations: Not Supported 00:17:20.639 UUID List: Not Supported 00:17:20.639 Multi-Domain Subsystem: Not Supported 00:17:20.639 Fixed Capacity Management: Not Supported 00:17:20.639 Variable Capacity Management: Not Supported 00:17:20.639 Delete Endurance Group: Not Supported 00:17:20.639 Delete NVM Set: Not Supported 00:17:20.639 Extended LBA Formats Supported: Not Supported 00:17:20.639 Flexible Data 
Placement Supported: Not Supported 00:17:20.639 00:17:20.639 Controller Memory Buffer Support 00:17:20.639 ================================ 00:17:20.639 Supported: No 00:17:20.639 00:17:20.639 Persistent Memory Region Support 00:17:20.639 ================================ 00:17:20.639 Supported: No 00:17:20.639 00:17:20.639 Admin Command Set Attributes 00:17:20.640 ============================ 00:17:20.640 Security Send/Receive: Not Supported 00:17:20.640 Format NVM: Not Supported 00:17:20.640 Firmware Activate/Download: Not Supported 00:17:20.640 Namespace Management: Not Supported 00:17:20.640 Device Self-Test: Not Supported 00:17:20.640 Directives: Not Supported 00:17:20.640 NVMe-MI: Not Supported 00:17:20.640 Virtualization Management: Not Supported 00:17:20.640 Doorbell Buffer Config: Not Supported 00:17:20.640 Get LBA Status Capability: Not Supported 00:17:20.640 Command & Feature Lockdown Capability: Not Supported 00:17:20.640 Abort Command Limit: 1 00:17:20.640 Async Event Request Limit: 1 00:17:20.640 Number of Firmware Slots: N/A 00:17:20.640 Firmware Slot 1 Read-Only: N/A 00:17:20.640 Firmware Activation Without Reset: N/A 00:17:20.640 Multiple Update Detection Support: N/A 00:17:20.640 Firmware Update Granularity: No Information Provided 00:17:20.640 Per-Namespace SMART Log: No 00:17:20.640 Asymmetric Namespace Access Log Page: Not Supported 00:17:20.640 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:20.640 Command Effects Log Page: Not Supported 00:17:20.640 Get Log Page Extended Data: Supported 00:17:20.640 Telemetry Log Pages: Not Supported 00:17:20.640 Persistent Event Log Pages: Not Supported 00:17:20.640 Supported Log Pages Log Page: May Support 00:17:20.640 Commands Supported & Effects Log Page: Not Supported 00:17:20.640 Feature Identifiers & Effects Log Page:May Support 00:17:20.640 NVMe-MI Commands & Effects Log Page: May Support 00:17:20.640 Data Area 4 for Telemetry Log: Not Supported 00:17:20.640 Error Log Page Entries Supported: 1 00:17:20.640 Keep Alive: Not Supported 00:17:20.640 00:17:20.640 NVM Command Set Attributes 00:17:20.640 ========================== 00:17:20.640 Submission Queue Entry Size 00:17:20.640 Max: 1 00:17:20.640 Min: 1 00:17:20.640 Completion Queue Entry Size 00:17:20.640 Max: 1 00:17:20.640 Min: 1 00:17:20.640 Number of Namespaces: 0 00:17:20.640 Compare Command: Not Supported 00:17:20.640 Write Uncorrectable Command: Not Supported 00:17:20.640 Dataset Management Command: Not Supported 00:17:20.640 Write Zeroes Command: Not Supported 00:17:20.640 Set Features Save Field: Not Supported 00:17:20.640 Reservations: Not Supported 00:17:20.640 Timestamp: Not Supported 00:17:20.640 Copy: Not Supported 00:17:20.640 Volatile Write Cache: Not Present 00:17:20.640 Atomic Write Unit (Normal): 1 00:17:20.640 Atomic Write Unit (PFail): 1 00:17:20.640 Atomic Compare & Write Unit: 1 00:17:20.640 Fused Compare & Write: Not Supported 00:17:20.640 Scatter-Gather List 00:17:20.640 SGL Command Set: Supported 00:17:20.640 SGL Keyed: Not Supported 00:17:20.640 SGL Bit Bucket Descriptor: Not Supported 00:17:20.640 SGL Metadata Pointer: Not Supported 00:17:20.640 Oversized SGL: Not Supported 00:17:20.640 SGL Metadata Address: Not Supported 00:17:20.640 SGL Offset: Supported 00:17:20.640 Transport SGL Data Block: Not Supported 00:17:20.640 Replay Protected Memory Block: Not Supported 00:17:20.640 00:17:20.640 Firmware Slot Information 00:17:20.640 ========================= 00:17:20.640 Active slot: 0 00:17:20.640 00:17:20.640 00:17:20.640 Error Log 
00:17:20.640 ========= 00:17:20.640 00:17:20.640 Active Namespaces 00:17:20.640 ================= 00:17:20.640 Discovery Log Page 00:17:20.640 ================== 00:17:20.640 Generation Counter: 2 00:17:20.640 Number of Records: 2 00:17:20.640 Record Format: 0 00:17:20.640 00:17:20.640 Discovery Log Entry 0 00:17:20.640 ---------------------- 00:17:20.640 Transport Type: 3 (TCP) 00:17:20.640 Address Family: 1 (IPv4) 00:17:20.640 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:20.640 Entry Flags: 00:17:20.640 Duplicate Returned Information: 0 00:17:20.640 Explicit Persistent Connection Support for Discovery: 0 00:17:20.640 Transport Requirements: 00:17:20.640 Secure Channel: Not Specified 00:17:20.640 Port ID: 1 (0x0001) 00:17:20.640 Controller ID: 65535 (0xffff) 00:17:20.640 Admin Max SQ Size: 32 00:17:20.640 Transport Service Identifier: 4420 00:17:20.640 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:20.640 Transport Address: 10.0.0.1 00:17:20.640 Discovery Log Entry 1 00:17:20.640 ---------------------- 00:17:20.640 Transport Type: 3 (TCP) 00:17:20.640 Address Family: 1 (IPv4) 00:17:20.640 Subsystem Type: 2 (NVM Subsystem) 00:17:20.640 Entry Flags: 00:17:20.640 Duplicate Returned Information: 0 00:17:20.640 Explicit Persistent Connection Support for Discovery: 0 00:17:20.640 Transport Requirements: 00:17:20.640 Secure Channel: Not Specified 00:17:20.640 Port ID: 1 (0x0001) 00:17:20.640 Controller ID: 65535 (0xffff) 00:17:20.640 Admin Max SQ Size: 32 00:17:20.640 Transport Service Identifier: 4420 00:17:20.640 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:20.640 Transport Address: 10.0.0.1 00:17:20.640 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:20.900 get_feature(0x01) failed 00:17:20.900 get_feature(0x02) failed 00:17:20.900 get_feature(0x04) failed 00:17:20.900 ===================================================== 00:17:20.900 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:20.900 ===================================================== 00:17:20.900 Controller Capabilities/Features 00:17:20.900 ================================ 00:17:20.900 Vendor ID: 0000 00:17:20.900 Subsystem Vendor ID: 0000 00:17:20.900 Serial Number: 9ba798e3171482ca8fb5 00:17:20.900 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:20.900 Firmware Version: 6.8.9-20 00:17:20.900 Recommended Arb Burst: 6 00:17:20.900 IEEE OUI Identifier: 00 00 00 00:17:20.900 Multi-path I/O 00:17:20.900 May have multiple subsystem ports: Yes 00:17:20.900 May have multiple controllers: Yes 00:17:20.900 Associated with SR-IOV VF: No 00:17:20.900 Max Data Transfer Size: Unlimited 00:17:20.900 Max Number of Namespaces: 1024 00:17:20.900 Max Number of I/O Queues: 128 00:17:20.900 NVMe Specification Version (VS): 1.3 00:17:20.900 NVMe Specification Version (Identify): 1.3 00:17:20.900 Maximum Queue Entries: 1024 00:17:20.900 Contiguous Queues Required: No 00:17:20.900 Arbitration Mechanisms Supported 00:17:20.900 Weighted Round Robin: Not Supported 00:17:20.900 Vendor Specific: Not Supported 00:17:20.900 Reset Timeout: 7500 ms 00:17:20.900 Doorbell Stride: 4 bytes 00:17:20.900 NVM Subsystem Reset: Not Supported 00:17:20.900 Command Sets Supported 00:17:20.900 NVM Command Set: Supported 00:17:20.900 Boot Partition: Not Supported 00:17:20.900 Memory 
Page Size Minimum: 4096 bytes 00:17:20.900 Memory Page Size Maximum: 4096 bytes 00:17:20.900 Persistent Memory Region: Not Supported 00:17:20.900 Optional Asynchronous Events Supported 00:17:20.900 Namespace Attribute Notices: Supported 00:17:20.900 Firmware Activation Notices: Not Supported 00:17:20.900 ANA Change Notices: Supported 00:17:20.900 PLE Aggregate Log Change Notices: Not Supported 00:17:20.900 LBA Status Info Alert Notices: Not Supported 00:17:20.900 EGE Aggregate Log Change Notices: Not Supported 00:17:20.900 Normal NVM Subsystem Shutdown event: Not Supported 00:17:20.900 Zone Descriptor Change Notices: Not Supported 00:17:20.900 Discovery Log Change Notices: Not Supported 00:17:20.900 Controller Attributes 00:17:20.900 128-bit Host Identifier: Supported 00:17:20.900 Non-Operational Permissive Mode: Not Supported 00:17:20.900 NVM Sets: Not Supported 00:17:20.900 Read Recovery Levels: Not Supported 00:17:20.900 Endurance Groups: Not Supported 00:17:20.900 Predictable Latency Mode: Not Supported 00:17:20.900 Traffic Based Keep ALive: Supported 00:17:20.900 Namespace Granularity: Not Supported 00:17:20.900 SQ Associations: Not Supported 00:17:20.900 UUID List: Not Supported 00:17:20.900 Multi-Domain Subsystem: Not Supported 00:17:20.900 Fixed Capacity Management: Not Supported 00:17:20.900 Variable Capacity Management: Not Supported 00:17:20.900 Delete Endurance Group: Not Supported 00:17:20.900 Delete NVM Set: Not Supported 00:17:20.900 Extended LBA Formats Supported: Not Supported 00:17:20.900 Flexible Data Placement Supported: Not Supported 00:17:20.900 00:17:20.900 Controller Memory Buffer Support 00:17:20.900 ================================ 00:17:20.900 Supported: No 00:17:20.900 00:17:20.900 Persistent Memory Region Support 00:17:20.900 ================================ 00:17:20.900 Supported: No 00:17:20.900 00:17:20.900 Admin Command Set Attributes 00:17:20.900 ============================ 00:17:20.900 Security Send/Receive: Not Supported 00:17:20.900 Format NVM: Not Supported 00:17:20.900 Firmware Activate/Download: Not Supported 00:17:20.900 Namespace Management: Not Supported 00:17:20.900 Device Self-Test: Not Supported 00:17:20.900 Directives: Not Supported 00:17:20.900 NVMe-MI: Not Supported 00:17:20.900 Virtualization Management: Not Supported 00:17:20.900 Doorbell Buffer Config: Not Supported 00:17:20.900 Get LBA Status Capability: Not Supported 00:17:20.900 Command & Feature Lockdown Capability: Not Supported 00:17:20.900 Abort Command Limit: 4 00:17:20.900 Async Event Request Limit: 4 00:17:20.900 Number of Firmware Slots: N/A 00:17:20.900 Firmware Slot 1 Read-Only: N/A 00:17:20.900 Firmware Activation Without Reset: N/A 00:17:20.900 Multiple Update Detection Support: N/A 00:17:20.900 Firmware Update Granularity: No Information Provided 00:17:20.900 Per-Namespace SMART Log: Yes 00:17:20.900 Asymmetric Namespace Access Log Page: Supported 00:17:20.900 ANA Transition Time : 10 sec 00:17:20.900 00:17:20.900 Asymmetric Namespace Access Capabilities 00:17:20.900 ANA Optimized State : Supported 00:17:20.900 ANA Non-Optimized State : Supported 00:17:20.900 ANA Inaccessible State : Supported 00:17:20.900 ANA Persistent Loss State : Supported 00:17:20.900 ANA Change State : Supported 00:17:20.900 ANAGRPID is not changed : No 00:17:20.901 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:20.901 00:17:20.901 ANA Group Identifier Maximum : 128 00:17:20.901 Number of ANA Group Identifiers : 128 00:17:20.901 Max Number of Allowed Namespaces : 1024 00:17:20.901 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:20.901 Command Effects Log Page: Supported 00:17:20.901 Get Log Page Extended Data: Supported 00:17:20.901 Telemetry Log Pages: Not Supported 00:17:20.901 Persistent Event Log Pages: Not Supported 00:17:20.901 Supported Log Pages Log Page: May Support 00:17:20.901 Commands Supported & Effects Log Page: Not Supported 00:17:20.901 Feature Identifiers & Effects Log Page:May Support 00:17:20.901 NVMe-MI Commands & Effects Log Page: May Support 00:17:20.901 Data Area 4 for Telemetry Log: Not Supported 00:17:20.901 Error Log Page Entries Supported: 128 00:17:20.901 Keep Alive: Supported 00:17:20.901 Keep Alive Granularity: 1000 ms 00:17:20.901 00:17:20.901 NVM Command Set Attributes 00:17:20.901 ========================== 00:17:20.901 Submission Queue Entry Size 00:17:20.901 Max: 64 00:17:20.901 Min: 64 00:17:20.901 Completion Queue Entry Size 00:17:20.901 Max: 16 00:17:20.901 Min: 16 00:17:20.901 Number of Namespaces: 1024 00:17:20.901 Compare Command: Not Supported 00:17:20.901 Write Uncorrectable Command: Not Supported 00:17:20.901 Dataset Management Command: Supported 00:17:20.901 Write Zeroes Command: Supported 00:17:20.901 Set Features Save Field: Not Supported 00:17:20.901 Reservations: Not Supported 00:17:20.901 Timestamp: Not Supported 00:17:20.901 Copy: Not Supported 00:17:20.901 Volatile Write Cache: Present 00:17:20.901 Atomic Write Unit (Normal): 1 00:17:20.901 Atomic Write Unit (PFail): 1 00:17:20.901 Atomic Compare & Write Unit: 1 00:17:20.901 Fused Compare & Write: Not Supported 00:17:20.901 Scatter-Gather List 00:17:20.901 SGL Command Set: Supported 00:17:20.901 SGL Keyed: Not Supported 00:17:20.901 SGL Bit Bucket Descriptor: Not Supported 00:17:20.901 SGL Metadata Pointer: Not Supported 00:17:20.901 Oversized SGL: Not Supported 00:17:20.901 SGL Metadata Address: Not Supported 00:17:20.901 SGL Offset: Supported 00:17:20.901 Transport SGL Data Block: Not Supported 00:17:20.901 Replay Protected Memory Block: Not Supported 00:17:20.901 00:17:20.901 Firmware Slot Information 00:17:20.901 ========================= 00:17:20.901 Active slot: 0 00:17:20.901 00:17:20.901 Asymmetric Namespace Access 00:17:20.901 =========================== 00:17:20.901 Change Count : 0 00:17:20.901 Number of ANA Group Descriptors : 1 00:17:20.901 ANA Group Descriptor : 0 00:17:20.901 ANA Group ID : 1 00:17:20.901 Number of NSID Values : 1 00:17:20.901 Change Count : 0 00:17:20.901 ANA State : 1 00:17:20.901 Namespace Identifier : 1 00:17:20.901 00:17:20.901 Commands Supported and Effects 00:17:20.901 ============================== 00:17:20.901 Admin Commands 00:17:20.901 -------------- 00:17:20.901 Get Log Page (02h): Supported 00:17:20.901 Identify (06h): Supported 00:17:20.901 Abort (08h): Supported 00:17:20.901 Set Features (09h): Supported 00:17:20.901 Get Features (0Ah): Supported 00:17:20.901 Asynchronous Event Request (0Ch): Supported 00:17:20.901 Keep Alive (18h): Supported 00:17:20.901 I/O Commands 00:17:20.901 ------------ 00:17:20.901 Flush (00h): Supported 00:17:20.901 Write (01h): Supported LBA-Change 00:17:20.901 Read (02h): Supported 00:17:20.901 Write Zeroes (08h): Supported LBA-Change 00:17:20.901 Dataset Management (09h): Supported 00:17:20.901 00:17:20.901 Error Log 00:17:20.901 ========= 00:17:20.901 Entry: 0 00:17:20.901 Error Count: 0x3 00:17:20.901 Submission Queue Id: 0x0 00:17:20.901 Command Id: 0x5 00:17:20.901 Phase Bit: 0 00:17:20.901 Status Code: 0x2 00:17:20.901 Status Code Type: 0x0 00:17:20.901 Do Not Retry: 1 00:17:20.901 Error 
Location: 0x28 00:17:20.901 LBA: 0x0 00:17:20.901 Namespace: 0x0 00:17:20.901 Vendor Log Page: 0x0 00:17:20.901 ----------- 00:17:20.901 Entry: 1 00:17:20.901 Error Count: 0x2 00:17:20.901 Submission Queue Id: 0x0 00:17:20.901 Command Id: 0x5 00:17:20.901 Phase Bit: 0 00:17:20.901 Status Code: 0x2 00:17:20.901 Status Code Type: 0x0 00:17:20.901 Do Not Retry: 1 00:17:20.901 Error Location: 0x28 00:17:20.901 LBA: 0x0 00:17:20.901 Namespace: 0x0 00:17:20.901 Vendor Log Page: 0x0 00:17:20.901 ----------- 00:17:20.901 Entry: 2 00:17:20.901 Error Count: 0x1 00:17:20.901 Submission Queue Id: 0x0 00:17:20.901 Command Id: 0x4 00:17:20.901 Phase Bit: 0 00:17:20.901 Status Code: 0x2 00:17:20.901 Status Code Type: 0x0 00:17:20.901 Do Not Retry: 1 00:17:20.901 Error Location: 0x28 00:17:20.901 LBA: 0x0 00:17:20.901 Namespace: 0x0 00:17:20.901 Vendor Log Page: 0x0 00:17:20.901 00:17:20.901 Number of Queues 00:17:20.901 ================ 00:17:20.901 Number of I/O Submission Queues: 128 00:17:20.901 Number of I/O Completion Queues: 128 00:17:20.901 00:17:20.901 ZNS Specific Controller Data 00:17:20.901 ============================ 00:17:20.901 Zone Append Size Limit: 0 00:17:20.901 00:17:20.901 00:17:20.901 Active Namespaces 00:17:20.901 ================= 00:17:20.901 get_feature(0x05) failed 00:17:20.901 Namespace ID:1 00:17:20.901 Command Set Identifier: NVM (00h) 00:17:20.901 Deallocate: Supported 00:17:20.901 Deallocated/Unwritten Error: Not Supported 00:17:20.901 Deallocated Read Value: Unknown 00:17:20.901 Deallocate in Write Zeroes: Not Supported 00:17:20.901 Deallocated Guard Field: 0xFFFF 00:17:20.901 Flush: Supported 00:17:20.901 Reservation: Not Supported 00:17:20.901 Namespace Sharing Capabilities: Multiple Controllers 00:17:20.901 Size (in LBAs): 1310720 (5GiB) 00:17:20.901 Capacity (in LBAs): 1310720 (5GiB) 00:17:20.901 Utilization (in LBAs): 1310720 (5GiB) 00:17:20.901 UUID: 49e56bf4-7048-42eb-b805-4ad0e8a76d27 00:17:20.901 Thin Provisioning: Not Supported 00:17:20.901 Per-NS Atomic Units: Yes 00:17:20.901 Atomic Boundary Size (Normal): 0 00:17:20.901 Atomic Boundary Size (PFail): 0 00:17:20.901 Atomic Boundary Offset: 0 00:17:20.901 NGUID/EUI64 Never Reused: No 00:17:20.901 ANA group ID: 1 00:17:20.901 Namespace Write Protected: No 00:17:20.901 Number of LBA Formats: 1 00:17:20.901 Current LBA Format: LBA Format #00 00:17:20.901 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:20.901 00:17:20.901 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:20.901 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:20.901 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:20.901 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:20.901 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:20.901 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.901 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:20.901 rmmod nvme_tcp 00:17:21.160 rmmod nvme_fabrics 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:21.160 09:23:12 
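Before the teardown that follows, the earlier part of the trace is worth recapping: configure_kernel_target stood the Linux soft target up purely through configfs, pointed namespace 1 at the free local device picked by block_in_use (/dev/nvme1n1 in this run), and linked the port to the subsystem so that nvme discover and the two spdk_nvme_identify runs above could reach it at 10.0.0.1:4420. A condensed sketch of those configfs writes; the directory layout and values are copied from the trace, but the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet ones and the mapping of the two bare "echo 1"s to them is inferred, since xtrace does not show the redirections:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    modprobe nvmet                                   # nvmet_tcp is pulled in for the tcp port
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1                                > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1                     > "$subsys/namespaces/1/device_path"
    echo 1                                > "$subsys/namespaces/1/enable"

    echo 10.0.0.1                         > "$nvmet/ports/1/addr_traddr"
    echo tcp                              > "$nvmet/ports/1/addr_trtype"
    echo 4420                             > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4                             > "$nvmet/ports/1/addr_adrfam"

    ln -s "$subsys" "$nvmet/ports/1/subsystems/"     # expose the subsystem on the port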
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.160 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.161 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:21.161 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.161 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.161 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:17:21.419 09:23:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:21.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:22.245 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:22.245 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:22.245 ************************************ 00:17:22.245 END TEST nvmf_identify_kernel_target 00:17:22.245 ************************************ 00:17:22.245 00:17:22.245 real 0m3.322s 00:17:22.245 user 0m1.147s 00:17:22.245 sys 0m1.503s 00:17:22.245 09:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.245 09:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.245 09:23:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:22.245 09:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:22.245 09:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.245 09:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.245 ************************************ 00:17:22.245 START TEST nvmf_auth_host 00:17:22.245 ************************************ 00:17:22.245 09:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:22.505 * Looking for test storage... 
00:17:22.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.505 09:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:22.505 09:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:17:22.505 09:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:22.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.505 --rc genhtml_branch_coverage=1 00:17:22.505 --rc genhtml_function_coverage=1 00:17:22.505 --rc genhtml_legend=1 00:17:22.505 --rc geninfo_all_blocks=1 00:17:22.505 --rc geninfo_unexecuted_blocks=1 00:17:22.505 00:17:22.505 ' 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:22.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.505 --rc genhtml_branch_coverage=1 00:17:22.505 --rc genhtml_function_coverage=1 00:17:22.505 --rc genhtml_legend=1 00:17:22.505 --rc geninfo_all_blocks=1 00:17:22.505 --rc geninfo_unexecuted_blocks=1 00:17:22.505 00:17:22.505 ' 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:22.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.505 --rc genhtml_branch_coverage=1 00:17:22.505 --rc genhtml_function_coverage=1 00:17:22.505 --rc genhtml_legend=1 00:17:22.505 --rc geninfo_all_blocks=1 00:17:22.505 --rc geninfo_unexecuted_blocks=1 00:17:22.505 00:17:22.505 ' 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:22.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.505 --rc genhtml_branch_coverage=1 00:17:22.505 --rc genhtml_function_coverage=1 00:17:22.505 --rc genhtml_legend=1 00:17:22.505 --rc geninfo_all_blocks=1 00:17:22.505 --rc geninfo_unexecuted_blocks=1 00:17:22.505 00:17:22.505 ' 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
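Before the test proper starts, the lt/cmp_versions helpers from scripts/common.sh traced above compare the installed lcov version against 2 to decide which coverage flags to export: both version strings are split on ".-:" and compared component by component. A compact sketch of that comparison, under the assumption that only the less-than case matters here (the real helper also validates that each component is numeric):

lt_sketch() {                      # returns 0 if $1 < $2, as in "lt 1.15 2"
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1                       # equal -> not less-than
}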
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.505 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.506 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:22.506 Cannot find device "nvmf_init_br" 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:22.506 Cannot find device "nvmf_init_br2" 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:22.506 Cannot find device "nvmf_tgt_br" 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.506 Cannot find device "nvmf_tgt_br2" 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:22.506 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:22.765 Cannot find device "nvmf_init_br" 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:22.765 Cannot find device "nvmf_init_br2" 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:22.765 Cannot find device "nvmf_tgt_br" 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:22.765 Cannot find device "nvmf_tgt_br2" 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:22.765 Cannot find device "nvmf_br" 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:22.765 Cannot find device "nvmf_init_if" 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:22.765 Cannot find device "nvmf_init_if2" 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.765 09:23:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:22.765 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
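The nvmf_veth_init steps in this stretch of the trace (nvmf/common.sh@177-214) build the virtual test network: a private namespace for the target plus veth pairs whose host-side peers are enslaved to a bridge. Condensed here to one initiator/target pair, with names and addresses exactly as in the trace; the *_if2 pair and the final `up` commands follow the same pattern:

ip netns add nvmf_tgt_ns_spdk                                # namespace that will run nvmf_tgt
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two host-side peers together
ip link set nvmf_tgt_br master nvmf_br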
00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:23.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:17:23.024 00:17:23.024 --- 10.0.0.3 ping statistics --- 00:17:23.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.024 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:23.024 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:23.024 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:17:23.024 00:17:23.024 --- 10.0.0.4 ping statistics --- 00:17:23.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.024 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:23.024 00:17:23.024 --- 10.0.0.1 ping statistics --- 00:17:23.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.024 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:23.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:23.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:23.024 00:17:23.024 --- 10.0.0.2 ping statistics --- 00:17:23.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.024 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # return 0 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=78588 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 78588 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78588 ']' 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
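With the topology in place, iptables ACCEPT rules for port 4420 are inserted (tagged SPDK_NVMF so the teardown's iptr can strip them again), connectivity is verified with the four pings, and nvmfappstart launches the target inside the namespace. A sketch of that launch plus the wait-for-RPC step; the polling loop is a simplified stand-in for waitforlisten in common/autotest_common.sh, not its actual implementation:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!                                               # 78588 in this run

# block until the app's RPC server answers on the default socket
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done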
00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.024 09:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f4cc609cfea188a23290274b8ac72f04 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.3Ja 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f4cc609cfea188a23290274b8ac72f04 0 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f4cc609cfea188a23290274b8ac72f04 0 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f4cc609cfea188a23290274b8ac72f04 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.3Ja 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.3Ja 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3Ja 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:24.402 09:23:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a3650d2e333f98120b4676357a8669cf677d79fbdac8c811f9107086eee1c826 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.FrE 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a3650d2e333f98120b4676357a8669cf677d79fbdac8c811f9107086eee1c826 3 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a3650d2e333f98120b4676357a8669cf677d79fbdac8c811f9107086eee1c826 3 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a3650d2e333f98120b4676357a8669cf677d79fbdac8c811f9107086eee1c826 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.FrE 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.FrE 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.FrE 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=63f3a176f347804e87f4dabe0aea182af3af1dfdc8402a45 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.H7B 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 63f3a176f347804e87f4dabe0aea182af3af1dfdc8402a45 0 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 63f3a176f347804e87f4dabe0aea182af3af1dfdc8402a45 0 
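host/auth.sh then generates five DH-HMAC-CHAP key/controller-key pairs with gen_dhchap_key, as traced above: random hex from xxd, a python one-liner (its body is not shown in the trace) that wraps the material in the DHHC-1 secret representation, and a 0600 temp file. A hedged sketch of that flow; the CRC32 suffix, its byte order, and the digest-id field reflect the standard DHHC-1 secret layout and are assumptions, not a copy of the script's one-liner:

gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2                           # e.g. 0 32 -> "null" digest, 32 hex chars
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters of randomness
    file=$(mktemp -t spdk.key.XXX)
    python3 - "$key" "$digest_id" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")             # CRC byte order assumed
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"                                  # keys must not be world-readable
    echo "$file"
}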
00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=63f3a176f347804e87f4dabe0aea182af3af1dfdc8402a45 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.H7B 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.H7B 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.H7B 00:17:24.402 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:24.403 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:24.403 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.403 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:24.403 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:17:24.403 09:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ea88679e1146146454042b6cc580ca0d2b7a80b09bb10fc8 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.aWF 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ea88679e1146146454042b6cc580ca0d2b7a80b09bb10fc8 2 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ea88679e1146146454042b6cc580ca0d2b7a80b09bb10fc8 2 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ea88679e1146146454042b6cc580ca0d2b7a80b09bb10fc8 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.aWF 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.aWF 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.aWF 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.403 09:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0d0027d652151486cea605a64183c24b 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.pdx 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0d0027d652151486cea605a64183c24b 1 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0d0027d652151486cea605a64183c24b 1 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0d0027d652151486cea605a64183c24b 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:17:24.403 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:24.661 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.pdx 00:17:24.661 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.pdx 00:17:24.661 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pdx 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=bf6ef2cd4c26c0357ebcbda2314dfd62 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.uFc 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key bf6ef2cd4c26c0357ebcbda2314dfd62 1 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 bf6ef2cd4c26c0357ebcbda2314dfd62 1 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=bf6ef2cd4c26c0357ebcbda2314dfd62 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.uFc 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.uFc 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uFc 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e9e57a283c973f8366ef1dd7eddbf449ac726f03b4b9336c 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.DBh 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e9e57a283c973f8366ef1dd7eddbf449ac726f03b4b9336c 2 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e9e57a283c973f8366ef1dd7eddbf449ac726f03b4b9336c 2 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e9e57a283c973f8366ef1dd7eddbf449ac726f03b4b9336c 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.DBh 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.DBh 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.DBh 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:24.662 09:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=bb0b09defbd06237dacf0480c2ca6400 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.G4q 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key bb0b09defbd06237dacf0480c2ca6400 0 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 bb0b09defbd06237dacf0480c2ca6400 0 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=bb0b09defbd06237dacf0480c2ca6400 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:17:24.662 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.G4q 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.G4q 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.G4q 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d8a3312396185e75d3b74850fc9acc4c8c5b5aaea0e7d2776999ca19e0b88a93 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Fwh 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d8a3312396185e75d3b74850fc9acc4c8c5b5aaea0e7d2776999ca19e0b88a93 3 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d8a3312396185e75d3b74850fc9acc4c8c5b5aaea0e7d2776999ca19e0b88a93 3 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d8a3312396185e75d3b74850fc9acc4c8c5b5aaea0e7d2776999ca19e0b88a93 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Fwh 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Fwh 00:17:24.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Fwh 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78588 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78588 ']' 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.921 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3Ja 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.FrE ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FrE 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.H7B 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.aWF ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.aWF 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pdx 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uFc ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uFc 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.DBh 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.G4q ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.G4q 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Fwh 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:25.182 09:23:16 
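Each generated file is then registered with the running target through rpc_cmd, which forwards to scripts/rpc.py against the app's /var/tmp/spdk.sock. The equivalent direct calls for the first two key pairs traced above (rpc_cmd normally goes through the test framework's RPC helper; plain rpc.py invocations are shown here for clarity):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.3Ja     # host key 0
"$rpc" -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FrE   # its controller key
"$rpc" -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.H7B
"$rpc" -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aWF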
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:25.182 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:25.183 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:17:25.444 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:25.444 09:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:25.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:25.703 Waiting for block devices as requested 00:17:25.703 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:25.962 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:26.530 No valid GPT data, bailing 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:26.530 No valid GPT data, bailing 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:26.530 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:26.789 No valid GPT data, bailing 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:26.789 No valid GPT data, bailing 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid=a5ef64a0-86d4-4d8b-af10-05a9f556092c -a 10.0.0.1 -t tcp -s 4420 00:17:26.789 00:17:26.789 Discovery Log Number of Records 2, Generation counter 2 00:17:26.789 =====Discovery Log Entry 0====== 00:17:26.789 trtype: tcp 00:17:26.789 adrfam: ipv4 00:17:26.789 subtype: current discovery subsystem 00:17:26.789 treq: not specified, sq flow control disable supported 00:17:26.789 portid: 1 00:17:26.789 trsvcid: 4420 00:17:26.789 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:26.789 traddr: 10.0.0.1 00:17:26.789 eflags: none 00:17:26.789 sectype: none 00:17:26.789 =====Discovery Log Entry 1====== 00:17:26.789 trtype: tcp 00:17:26.789 adrfam: ipv4 00:17:26.789 subtype: nvme subsystem 00:17:26.789 treq: not specified, sq flow control disable supported 00:17:26.789 portid: 1 00:17:26.789 trsvcid: 4420 00:17:26.789 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:26.789 traddr: 10.0.0.1 00:17:26.789 eflags: none 00:17:26.789 sectype: none 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.789 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.047 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 
10.0.0.1 ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.048 nvme0n1 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.048 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.310 nvme0n1 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.310 
09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.310 09:23:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.310 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.569 nvme0n1 00:17:27.569 09:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:27.569 09:23:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.569 nvme0n1 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.569 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.829 09:23:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.829 nvme0n1 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.829 
09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.829 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
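(Condensed reading of the trace above: each pass of the digest/dhgroup/key loop repeats the same four host-side RPCs with a different combination. A minimal sketch of one iteration follows; rpc_cmd in the trace is the test suite's wrapper around SPDK's scripts/rpc.py, and keyN/ckeyN are the keyring entries registered earlier with keyring_file_add_key — the exact wrapper invocation shown here is an assumption, the RPC names and flags are taken verbatim from the log.)

    # restrict DH-CHAP negotiation to the digest/dhgroup under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # attach to the kernel target at 10.0.0.1:4420 using the matching keyring key
    # (--dhchap-ctrlr-key is added only when a controller key ckeyN exists)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4
    # verify the controller authenticated and came up, then tear it down for the next round
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0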
00:17:28.088 nvme0n1 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.088 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.347 09:23:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.347 09:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.606 nvme0n1 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.606 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.606 09:23:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.607 09:23:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.607 nvme0n1 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.607 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.866 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.866 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.866 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.866 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.866 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.866 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.866 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:28.866 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.866 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.867 nvme0n1 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.867 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.126 nvme0n1 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:29.126 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.127 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.385 nvme0n1 00:17:29.385 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.385 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.385 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.385 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.385 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.385 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.386 09:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.953 09:23:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.953 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.211 nvme0n1 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.211 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.212 09:23:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.212 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.471 nvme0n1 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.471 09:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.471 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.730 nvme0n1 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.730 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.731 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.990 nvme0n1 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.990 09:23:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.990 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.249 nvme0n1 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.249 09:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.199 nvme0n1 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.199 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.458 09:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.718 nvme0n1 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.718 09:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.718 09:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.718 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.977 nvme0n1 00:17:33.977 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.977 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.977 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.977 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.977 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.235 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.235 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.235 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.235 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.236 09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.236 
09:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.494 nvme0n1 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:34.494 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.495 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.495 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:34.495 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.495 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:34.495 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:34.495 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:34.495 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.495 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.495 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.062 nvme0n1 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.062 09:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.062 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.063 09:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.631 nvme0n1 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.631 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.564 nvme0n1 00:17:36.564 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.564 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.564 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.564 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.564 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.564 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.564 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.564 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.564 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.565 
09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.565 09:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.136 nvme0n1 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.136 09:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.703 nvme0n1 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.703 09:23:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.703 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:37.704 09:23:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.704 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.271 nvme0n1 00:17:38.271 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.271 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.271 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.271 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.271 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.271 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.271 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.272 09:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.531 nvme0n1 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.531 nvme0n1 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.531 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:38.791 
09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.791 nvme0n1 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.791 
09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.791 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:38.792 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.792 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:38.792 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:38.792 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:38.792 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:38.792 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.792 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 nvme0n1 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 nvme0n1 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.311 nvme0n1 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.311 
09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.311 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.570 09:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:39.570 09:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.570 nvme0n1 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.570 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:39.571 09:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.571 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.830 nvme0n1 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.830 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.831 09:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.831 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.125 nvme0n1 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:40.125 
09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.125 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:17:40.126 nvme0n1 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.126 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:40.408 09:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.408 09:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.408 nvme0n1 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.408 09:23:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.408 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.668 09:23:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.668 nvme0n1 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.668 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.928 nvme0n1 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.928 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.188 nvme0n1 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.188 09:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.447 nvme0n1 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.447 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.706 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.706 09:23:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.706 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:41.706 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:41.706 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:41.706 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.706 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.707 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:41.707 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.707 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:41.707 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:41.707 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:41.707 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.707 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.707 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.966 nvme0n1 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.966 09:23:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.966 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.225 nvme0n1 00:17:42.225 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.225 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.225 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.225 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.225 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.225 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.484 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.484 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.484 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.484 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.485 09:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.744 nvme0n1 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.744 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.003 nvme0n1 00:17:43.003 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.003 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.003 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.003 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.003 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.262 09:23:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.262 09:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.522 nvme0n1 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.522 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.090 nvme0n1 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.090 09:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.658 nvme0n1 00:17:44.658 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.658 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.658 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.658 09:23:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.658 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.658 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:44.917 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.918 09:23:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.918 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.486 nvme0n1 00:17:45.486 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.486 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.486 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.486 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.486 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.486 09:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:45.486 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.486 
09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.055 nvme0n1 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.055 09:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.623 nvme0n1 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:46.623 09:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:46.623 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:46.624 09:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.624 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.883 nvme0n1 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:46.883 09:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.883 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 nvme0n1 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 nvme0n1 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 nvme0n1 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 09:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:47.403 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:47.404 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:47.404 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.404 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.404 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.663 nvme0n1 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
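From this point the log is the same two-step pattern replayed for the next DH group: the outer loop (host/auth.sh@101) walks the ffdhe* groups, the inner loop (host/auth.sh@102) walks key IDs 0-4, and each pair gets one nvmet_auth_set_key plus one connect_authenticate. A compact sketch of that driver loop, with the digest fixed at sha512 as in this part of the run (only ffdhe2048-4096 appear in this excerpt; the full test covers more combinations):

```bash
# Driver loop as implied by host/auth.sh@101-104. The keys/ckeys arrays hold the
# DHHC-1 secrets registered earlier in the script (not shown in this excerpt).
digest=sha512
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)

for dhgroup in "${dhgroups[@]}"; do
	for keyid in "${!keys[@]}"; do
		# Program the target side with this key pair...
		nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
		# ...then prove the host can authenticate and connect with it.
		connect_authenticate "$digest" "$dhgroup" "$keyid"
	done
done
```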
ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.663 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.922 nvme0n1 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.922 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.923 nvme0n1 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.923 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:48.182 
09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:48.182 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.183 nvme0n1 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.183 
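Each successful attach is followed by the same verification and teardown block (host/auth.sh@64-65): list controllers, confirm the name is literally nvme0, then detach so the next key ID starts from a clean state; the interleaved nvme0n1 lines are the namespace bdev appearing as each controller comes up. A standalone version of that check:

```bash
# Verification + cleanup step, mirroring host/auth.sh@64-65.
ctrl_name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')

# The test treats anything other than "nvme0" as a failed authentication/attach.
[[ $ctrl_name == "nvme0" ]] || { echo "unexpected controller: ${ctrl_name}" >&2; exit 1; }

scripts/rpc.py bdev_nvme_detach_controller nvme0
```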
09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.183 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.442 nvme0n1 00:17:48.442 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.442 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.442 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.442 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.442 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.442 09:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.442 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.443 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.702 nvme0n1 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
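The secrets themselves are standard NVMe DH-HMAC-CHAP secrets: the "DHHC-1:NN:" prefix records how the base64 payload was transformed (00 = used as-is, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why key 0 carries a 00 prefix while its controller key carries 03. Compatible secrets can be generated with nvme-cli; the exact flags below are an assumption about nvme-cli's gen-dhchap-key command and are not taken from this log.

```bash
# Hypothetical key generation with nvme-cli (not part of this test run).
# --hmac selects the transformation: 0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512.
nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn nqn.2024-02.io.spdk:host0
# -> prints a "DHHC-1:01:...:" secret usable as a --dhchap-key / dhchap_key value
```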
host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.702 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 nvme0n1 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.962 
09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:48.962 09:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.962 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.222 nvme0n1 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:49.222 09:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.222 09:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.482 nvme0n1 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.482 09:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.482 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.741 nvme0n1 00:17:49.741 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.741 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.741 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.741 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.741 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.741 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.741 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.741 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.741 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.742 
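For reference, the host-side RPC sequence that each keyid/dhgroup iteration above exercises condenses to the short sketch below. This is a minimal illustration only, assuming SPDK's rpc_cmd wrapper around scripts/rpc.py and reusing the exact values printed in the trace (the 10.0.0.1:4420 TCP listener, the nqn.2024-02.io.spdk:host0 / nqn.2024-02.io.spdk:cnode0 NQNs, and key names such as key1/ckey1 registered earlier by the test); the matching target-side key is assumed to have been loaded beforehand by the nvmet_auth_set_key helper shown in the trace.

# Restrict the host to one digest/dhgroup combination, e.g. sha512 + ffdhe4096
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Connect with the DH-HMAC-CHAP key (and the controller key, when one is configured)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up, then tear it down before the next keyid/dhgroup
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
rpc_cmd bdev_nvme_detach_controller nvme0

The repeated [[ nvme0 == \n\v\m\e\0 ]] checks in the trace are this verification step: the controller name reported by bdev_nvme_get_controllers must match the name passed via -b before the test detaches it and moves on.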
09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.742 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:17:50.000 nvme0n1 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:50.000 09:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.000 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.001 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.568 nvme0n1 00:17:50.568 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.568 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.568 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.568 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.568 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.568 09:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.568 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.568 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.568 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.568 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.568 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.568 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.568 09:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:50.568 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.568 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.568 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.569 09:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.569 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.828 nvme0n1 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:50.828 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.829 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.087 nvme0n1 00:17:51.088 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.347 09:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 nvme0n1 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:51.608 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.609 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.177 nvme0n1 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRjYzYwOWNmZWExODhhMjMyOTAyNzRiOGFjNzJmMDQivq8Q: 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: ]] 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTM2NTBkMmUzMzNmOTgxMjBiNDY3NjM1N2E4NjY5Y2Y2NzdkNzlmYmRhYzhjODExZjkxMDcwODZlZWUxYzgyNleZUOQ=: 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.177 09:23:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.177 09:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.745 nvme0n1 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.745 09:23:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.745 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.313 nvme0n1 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:53.313 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.314 09:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.881 nvme0n1 00:17:53.881 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.881 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.881 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.881 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.881 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.881 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTllNTdhMjgzYzk3M2Y4MzY2ZWYxZGQ3ZWRkYmY0NDlhYzcyNmYwM2I0YjkzMzZjaLAkCw==: 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: ]] 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmIwYjA5ZGVmYmQwNjIzN2RhY2YwNDgwYzJjYTY0MDADdcnx: 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.140 09:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.708 nvme0n1 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhhMzMxMjM5NjE4NWU3NWQzYjc0ODUwZmM5YWNjNGM4YzViNWFhZWEwZTdkMjc3Njk5OWNhMTllMGI4OGE5M8SVaUo=: 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:54.708 09:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.708 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.277 nvme0n1 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.277 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.278 request: 00:17:55.278 { 00:17:55.278 "name": "nvme0", 00:17:55.278 "trtype": "tcp", 00:17:55.278 "traddr": "10.0.0.1", 00:17:55.278 "adrfam": "ipv4", 00:17:55.278 "trsvcid": "4420", 00:17:55.278 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:55.278 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:55.278 "prchk_reftag": false, 00:17:55.278 "prchk_guard": false, 00:17:55.278 "hdgst": false, 00:17:55.278 "ddgst": false, 00:17:55.278 "allow_unrecognized_csi": false, 00:17:55.278 "method": "bdev_nvme_attach_controller", 00:17:55.278 "req_id": 1 00:17:55.278 } 00:17:55.278 Got JSON-RPC error response 00:17:55.278 response: 00:17:55.278 { 00:17:55.278 "code": -5, 00:17:55.278 "message": "Input/output error" 00:17:55.278 } 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.278 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.537 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:55.537 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:55.537 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:55.537 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:55.537 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:55.537 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.537 09:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.537 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.537 request: 00:17:55.537 { 00:17:55.537 "name": "nvme0", 00:17:55.537 "trtype": "tcp", 00:17:55.537 "traddr": "10.0.0.1", 00:17:55.537 "adrfam": "ipv4", 00:17:55.537 "trsvcid": "4420", 00:17:55.537 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:55.537 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:55.537 "prchk_reftag": false, 00:17:55.537 "prchk_guard": false, 00:17:55.537 "hdgst": false, 00:17:55.537 "ddgst": false, 00:17:55.538 "dhchap_key": "key2", 00:17:55.538 "allow_unrecognized_csi": false, 00:17:55.538 "method": "bdev_nvme_attach_controller", 00:17:55.538 "req_id": 1 00:17:55.538 } 00:17:55.538 Got JSON-RPC error response 00:17:55.538 response: 00:17:55.538 { 00:17:55.538 "code": -5, 00:17:55.538 "message": "Input/output error" 00:17:55.538 } 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:55.538 09:23:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.538 request: 00:17:55.538 { 00:17:55.538 "name": "nvme0", 00:17:55.538 "trtype": "tcp", 00:17:55.538 "traddr": "10.0.0.1", 00:17:55.538 "adrfam": "ipv4", 00:17:55.538 "trsvcid": "4420", 
00:17:55.538 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:55.538 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:55.538 "prchk_reftag": false, 00:17:55.538 "prchk_guard": false, 00:17:55.538 "hdgst": false, 00:17:55.538 "ddgst": false, 00:17:55.538 "dhchap_key": "key1", 00:17:55.538 "dhchap_ctrlr_key": "ckey2", 00:17:55.538 "allow_unrecognized_csi": false, 00:17:55.538 "method": "bdev_nvme_attach_controller", 00:17:55.538 "req_id": 1 00:17:55.538 } 00:17:55.538 Got JSON-RPC error response 00:17:55.538 response: 00:17:55.538 { 00:17:55.538 "code": -5, 00:17:55.538 "message": "Input/output error" 00:17:55.538 } 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.538 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.797 nvme0n1 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.797 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.798 request: 00:17:55.798 { 00:17:55.798 "name": "nvme0", 00:17:55.798 "dhchap_key": "key1", 00:17:55.798 "dhchap_ctrlr_key": "ckey2", 00:17:55.798 "method": "bdev_nvme_set_keys", 00:17:55.798 "req_id": 1 00:17:55.798 } 00:17:55.798 Got JSON-RPC error response 00:17:55.798 response: 00:17:55.798 
{ 00:17:55.798 "code": -13, 00:17:55.798 "message": "Permission denied" 00:17:55.798 } 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:55.798 09:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:56.762 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.762 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:56.762 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.762 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmM2ExNzZmMzQ3ODA0ZTg3ZjRkYWJlMGFlYTE4MmFmM2FmMWRmZGM4NDAyYTQ1eHM6ZA==: 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE4ODY3OWUxMTQ2MTQ2NDU0MDQyYjZjYzU4MGNhMGQyYjdhODBiMDliYjEwZmM43U47RQ==: 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.021 nvme0n1 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQwMDI3ZDY1MjE1MTQ4NmNlYTYwNWE2NDE4M2MyNGKnjaiH: 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmY2ZWYyY2Q0YzI2YzAzNTdlYmNiZGEyMzE0ZGZkNjIDq4Lm: 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.021 request: 00:17:57.021 { 00:17:57.021 "name": "nvme0", 00:17:57.021 "dhchap_key": "key2", 00:17:57.021 "dhchap_ctrlr_key": "ckey1", 00:17:57.021 "method": "bdev_nvme_set_keys", 00:17:57.021 "req_id": 1 00:17:57.021 } 00:17:57.021 Got JSON-RPC error response 00:17:57.021 response: 00:17:57.021 { 00:17:57.021 "code": -13, 00:17:57.021 "message": "Permission denied" 00:17:57.021 } 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:57.021 09:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.398 rmmod nvme_tcp 00:17:58.398 rmmod nvme_fabrics 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 78588 ']' 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 78588 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 78588 ']' 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 78588 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78588 00:17:58.398 killing process with pid 78588 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78588' 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 78588 00:17:58.398 09:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 78588 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:58.657 09:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.657 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:17:58.916 09:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:59.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:59.743 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
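In essence, the nvmf_auth_host exercise traced above reduces to a short JSON-RPC sequence on the initiator side. A minimal sketch, assuming rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py and that key2/ckey2 name DH-HMAC-CHAP secrets registered with the keyring earlier in the run (that registration is not part of this excerpt):

  # restrict the initiator to the digest and DH group under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # authenticated connect: host key, plus an optional controller key for bidirectional auth
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # confirm the controller came up, then drop it before the next key id is tried
  scripts/rpc.py bdev_nvme_get_controllers
  scripts/rpc.py bdev_nvme_detach_controller nvme0
  # re-key a live controller; a pair the target does not accept is rejected with -13 (Permission denied),
  # just as attaching without the expected key fails with -5 (Input/output error) in the traces above
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2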
00:17:59.743 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:59.743 09:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3Ja /tmp/spdk.key-null.H7B /tmp/spdk.key-sha256.pdx /tmp/spdk.key-sha384.DBh /tmp/spdk.key-sha512.Fwh /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:59.743 09:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:00.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:00.311 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:00.311 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:00.311 ************************************ 00:18:00.311 END TEST nvmf_auth_host 00:18:00.311 ************************************ 00:18:00.311 00:18:00.311 real 0m37.884s 00:18:00.311 user 0m34.772s 00:18:00.311 sys 0m4.208s 00:18:00.311 09:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.311 09:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.311 09:23:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:00.312 09:23:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:00.312 09:23:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:00.312 09:23:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.312 09:23:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.312 ************************************ 00:18:00.312 START TEST nvmf_digest 00:18:00.312 ************************************ 00:18:00.312 09:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:00.312 * Looking for test storage... 
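The teardown traced just before this point unwinds the kernel nvmet target in the reverse order of its creation. In outline, under the configfs layout the trace implies (the attribute behind the bare 'echo 0' is assumed to be the namespace's enable flag):

  # drop the host from the subsystem's allow list, then remove the host entry itself
  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  # disable the namespace, unlink the subsystem from the port, then remove the configfs tree bottom-up
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  # unload the kernel target modules and delete the per-run DHHC-1 key files, as the rm -f in the trace does
  modprobe -r nvmet_tcp nvmet
  rm -f /tmp/spdk.key-null.3Ja /tmp/spdk.key-null.H7B /tmp/spdk.key-sha256.pdx /tmp/spdk.key-sha384.DBh /tmp/spdk.key-sha512.Fwh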
00:18:00.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.312 09:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:00.312 09:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:18:00.312 09:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:00.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.572 --rc genhtml_branch_coverage=1 00:18:00.572 --rc genhtml_function_coverage=1 00:18:00.572 --rc genhtml_legend=1 00:18:00.572 --rc geninfo_all_blocks=1 00:18:00.572 --rc geninfo_unexecuted_blocks=1 00:18:00.572 00:18:00.572 ' 00:18:00.572 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:00.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.572 --rc genhtml_branch_coverage=1 00:18:00.572 --rc genhtml_function_coverage=1 00:18:00.572 --rc genhtml_legend=1 00:18:00.572 --rc geninfo_all_blocks=1 00:18:00.573 --rc geninfo_unexecuted_blocks=1 00:18:00.573 00:18:00.573 ' 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:00.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.573 --rc genhtml_branch_coverage=1 00:18:00.573 --rc genhtml_function_coverage=1 00:18:00.573 --rc genhtml_legend=1 00:18:00.573 --rc geninfo_all_blocks=1 00:18:00.573 --rc geninfo_unexecuted_blocks=1 00:18:00.573 00:18:00.573 ' 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:00.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.573 --rc genhtml_branch_coverage=1 00:18:00.573 --rc genhtml_function_coverage=1 00:18:00.573 --rc genhtml_legend=1 00:18:00.573 --rc geninfo_all_blocks=1 00:18:00.573 --rc geninfo_unexecuted_blocks=1 00:18:00.573 00:18:00.573 ' 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.573 09:23:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.573 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.573 Cannot find device "nvmf_init_br" 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.573 Cannot find device "nvmf_init_br2" 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:00.573 Cannot find device "nvmf_tgt_br" 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:00.573 Cannot find device "nvmf_tgt_br2" 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:00.573 Cannot find device "nvmf_init_br" 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:00.573 Cannot find device "nvmf_init_br2" 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:00.573 Cannot find device "nvmf_tgt_br" 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:00.573 Cannot find device "nvmf_tgt_br2" 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:00.573 Cannot find device "nvmf_br" 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:00.573 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:00.573 Cannot find device "nvmf_init_if" 00:18:00.574 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:00.574 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:00.574 Cannot find device "nvmf_init_if2" 00:18:00.574 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:00.574 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.574 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:00.574 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.574 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:00.574 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.574 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.833 09:23:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:00.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:00.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:18:00.833 00:18:00.833 --- 10.0.0.3 ping statistics --- 00:18:00.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.833 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:00.833 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:00.833 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:18:00.833 00:18:00.833 --- 10.0.0.4 ping statistics --- 00:18:00.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.833 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:00.833 00:18:00.833 --- 10.0.0.1 ping statistics --- 00:18:00.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.833 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:00.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:18:00.833 00:18:00.833 --- 10.0.0.2 ping statistics --- 00:18:00.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.833 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # return 0 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:00.833 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:01.092 ************************************ 00:18:01.092 START TEST nvmf_digest_clean 00:18:01.092 ************************************ 00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
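The nvmf_veth_init sequence traced above reduces to a small fixed topology: one network namespace for the target, veth pairs whose host-side peers hang off a single bridge, addresses from 10.0.0.0/24, and iptables rules that admit NVMe/TCP traffic on port 4420. A condensed sketch for one initiator/target pair follows; the interface names, namespace name and addresses are copied from the trace, while the ordering is simplified and the second veth pair and the SPDK_NVMF iptables comments are omitted.

    # the target gets its own namespace; each side is a veth pair whose *_br peer stays on the host
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # address the endpoints: initiator on the host, target inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring the links up and bridge the host-side peers together
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP connections on port 4420 and let the bridge forward between its ports
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # the four pings above are the smoke test: the host reaches 10.0.0.3/10.0.0.4, the namespace reaches 10.0.0.1/10.0.0.2
    ping -c 1 10.0.0.3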
00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:01.092 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=80262 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 80262 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80262 ']' 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.093 09:23:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:01.093 [2024-10-08 09:23:52.600954] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:18:01.093 [2024-10-08 09:23:52.601065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.093 [2024-10-08 09:23:52.745251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.351 [2024-10-08 09:23:52.877607] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.351 [2024-10-08 09:23:52.877669] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.351 [2024-10-08 09:23:52.877684] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.351 [2024-10-08 09:23:52.877694] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.351 [2024-10-08 09:23:52.877703] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
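nvmfappstart above follows the standard SPDK bring-up pattern: launch nvmf_tgt inside the target namespace with --wait-for-rpc so it stops before subsystem initialization, poll its RPC socket (waitforlisten), then push the configuration over JSON-RPC, which is what the rpc_cmd batch just below does (socket implementation override, the null0 bdev, the TCP transport and the 10.0.0.3:4420 listener). A rough equivalent with scripts/rpc.py is sketched here; the actual RPC payload is not visible in the trace, so the bdev size and the transport/subsystem/listener arguments are assumptions rather than a copy of what common_target_config sends.

    # start the target suspended inside the namespace, then wait for /var/tmp/spdk.sock to answer
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # leave the --wait-for-rpc state and configure the target
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py bdev_null_create null0 100 4096
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420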
00:18:01.351 [2024-10-08 09:23:52.878199] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.287 [2024-10-08 09:23:53.760352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.287 null0 00:18:02.287 [2024-10-08 09:23:53.815975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.287 [2024-10-08 09:23:53.840100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:02.287 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80300 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80300 /var/tmp/bperf.sock 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80300 ']' 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.288 09:23:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.288 [2024-10-08 09:23:53.907095] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:18:02.288 [2024-10-08 09:23:53.907193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80300 ] 00:18:02.546 [2024-10-08 09:23:54.045921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.547 [2024-10-08 09:23:54.184320] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.482 09:23:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.482 09:23:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:03.482 09:23:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:03.482 09:23:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:03.482 09:23:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:03.742 [2024-10-08 09:23:55.403780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:04.000 09:23:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.001 09:23:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.260 nvme0n1 00:18:04.260 09:23:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:04.260 09:23:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:04.519 Running I/O for 2 seconds... 
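Each run_bperf invocation repeats the same client-side recipe visible in the trace above: start bdevperf suspended on its own RPC socket, initialize it, attach an NVMe-oF controller over TCP with the digest option under test, drive the 2-second workload through bdevperf.py, and finally read the accel statistics (the accel_get_stats call a few lines further down). Condensed from the commands of this first run (4 KiB random reads, queue depth 128, data digest enabled via --ddgst; repository paths shortened):

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats

Because --ddgst turns on the NVMe/TCP data digest, every data PDU carries a CRC32C, which is why the pass criterion further down only checks that the accel framework executed crc32c operations in the expected module (software here, since scan_dsa is false).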
00:18:06.391 14732.00 IOPS, 57.55 MiB/s [2024-10-08T09:23:58.074Z] 15240.00 IOPS, 59.53 MiB/s 00:18:06.391 Latency(us) 00:18:06.391 [2024-10-08T09:23:58.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.391 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:06.391 nvme0n1 : 2.01 15262.15 59.62 0.00 0.00 8381.19 7030.23 24903.68 00:18:06.391 [2024-10-08T09:23:58.074Z] =================================================================================================================== 00:18:06.391 [2024-10-08T09:23:58.074Z] Total : 15262.15 59.62 0.00 0.00 8381.19 7030.23 24903.68 00:18:06.391 { 00:18:06.391 "results": [ 00:18:06.391 { 00:18:06.391 "job": "nvme0n1", 00:18:06.391 "core_mask": "0x2", 00:18:06.391 "workload": "randread", 00:18:06.391 "status": "finished", 00:18:06.391 "queue_depth": 128, 00:18:06.391 "io_size": 4096, 00:18:06.391 "runtime": 2.005484, 00:18:06.391 "iops": 15262.151181460435, 00:18:06.391 "mibps": 59.61777805257982, 00:18:06.391 "io_failed": 0, 00:18:06.391 "io_timeout": 0, 00:18:06.391 "avg_latency_us": 8381.194985505868, 00:18:06.391 "min_latency_us": 7030.225454545454, 00:18:06.391 "max_latency_us": 24903.68 00:18:06.391 } 00:18:06.391 ], 00:18:06.391 "core_count": 1 00:18:06.391 } 00:18:06.391 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:06.391 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:06.391 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:06.391 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:06.391 | select(.opcode=="crc32c") 00:18:06.391 | "\(.module_name) \(.executed)"' 00:18:06.391 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80300 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80300 ']' 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80300 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80300 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:06.650 
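The summary table and the JSON block above carry the same data; the MiB/s column is simply IOPS times the I/O size, e.g. for this run 15262.15 IOPS x 4096 B / 2^20 ≈ 59.62 MiB/s, matching the "mibps" field. The pass/fail decision does not look at throughput at all: get_accel_stats extracts the CRC32C counters with the jq filter shown above, and the test only requires that some CRC32C work executed in the expected module. Roughly (the jq expression is copied from the trace; the surrounding shell is a simplification of host/digest.sh, not its literal code):

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    exp_module=software                                  # scan_dsa=false, so software CRC32C is expected
    (( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]]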
killing process with pid 80300 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80300' 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80300 00:18:06.650 Received shutdown signal, test time was about 2.000000 seconds 00:18:06.650 00:18:06.650 Latency(us) 00:18:06.650 [2024-10-08T09:23:58.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.650 [2024-10-08T09:23:58.333Z] =================================================================================================================== 00:18:06.650 [2024-10-08T09:23:58.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.650 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80300 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80360 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80360 /var/tmp/bperf.sock 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80360 ']' 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:06.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:06.909 09:23:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:07.168 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:07.168 Zero copy mechanism will not be used. 00:18:07.168 [2024-10-08 09:23:58.599681] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:18:07.168 [2024-10-08 09:23:58.599797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80360 ] 00:18:07.168 [2024-10-08 09:23:58.738679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.168 [2024-10-08 09:23:58.846676] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.103 09:23:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.103 09:23:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:08.103 09:23:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:08.103 09:23:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:08.103 09:23:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:08.366 [2024-10-08 09:23:59.861028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:08.366 09:23:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:08.366 09:23:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:08.636 nvme0n1 00:18:08.636 09:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:08.636 09:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:08.895 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:08.895 Zero copy mechanism will not be used. 00:18:08.895 Running I/O for 2 seconds... 
00:18:10.768 6864.00 IOPS, 858.00 MiB/s [2024-10-08T09:24:02.451Z] 6864.00 IOPS, 858.00 MiB/s 00:18:10.768 Latency(us) 00:18:10.768 [2024-10-08T09:24:02.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.768 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:10.768 nvme0n1 : 2.00 6863.12 857.89 0.00 0.00 2328.27 2040.55 5332.25 00:18:10.768 [2024-10-08T09:24:02.451Z] =================================================================================================================== 00:18:10.768 [2024-10-08T09:24:02.451Z] Total : 6863.12 857.89 0.00 0.00 2328.27 2040.55 5332.25 00:18:10.768 { 00:18:10.768 "results": [ 00:18:10.768 { 00:18:10.768 "job": "nvme0n1", 00:18:10.768 "core_mask": "0x2", 00:18:10.768 "workload": "randread", 00:18:10.768 "status": "finished", 00:18:10.768 "queue_depth": 16, 00:18:10.768 "io_size": 131072, 00:18:10.768 "runtime": 2.002588, 00:18:10.768 "iops": 6863.119123853733, 00:18:10.768 "mibps": 857.8898904817166, 00:18:10.768 "io_failed": 0, 00:18:10.768 "io_timeout": 0, 00:18:10.768 "avg_latency_us": 2328.2719102550536, 00:18:10.768 "min_latency_us": 2040.5527272727272, 00:18:10.768 "max_latency_us": 5332.2472727272725 00:18:10.768 } 00:18:10.768 ], 00:18:10.768 "core_count": 1 00:18:10.768 } 00:18:10.768 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:10.768 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:10.768 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:10.768 | select(.opcode=="crc32c") 00:18:10.768 | "\(.module_name) \(.executed)"' 00:18:10.768 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:10.768 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80360 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80360 ']' 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80360 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80360 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:18:11.336 killing process with pid 80360 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80360' 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80360 00:18:11.336 Received shutdown signal, test time was about 2.000000 seconds 00:18:11.336 00:18:11.336 Latency(us) 00:18:11.336 [2024-10-08T09:24:03.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.336 [2024-10-08T09:24:03.019Z] =================================================================================================================== 00:18:11.336 [2024-10-08T09:24:03.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80360 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80421 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80421 /var/tmp/bperf.sock 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80421 ']' 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.336 09:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:11.595 [2024-10-08 09:24:03.049349] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:18:11.595 [2024-10-08 09:24:03.049431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80421 ] 00:18:11.595 [2024-10-08 09:24:03.183617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.854 [2024-10-08 09:24:03.292477] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.421 09:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.421 09:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:12.421 09:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:12.421 09:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:12.421 09:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:12.680 [2024-10-08 09:24:04.323192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:12.938 09:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:12.938 09:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.197 nvme0n1 00:18:13.197 09:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:13.197 09:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:13.197 Running I/O for 2 seconds... 
00:18:15.525 17527.00 IOPS, 68.46 MiB/s [2024-10-08T09:24:07.208Z] 20348.50 IOPS, 79.49 MiB/s 00:18:15.525 Latency(us) 00:18:15.525 [2024-10-08T09:24:07.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.525 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.525 nvme0n1 : 2.01 20336.89 79.44 0.00 0.00 6279.22 3872.58 15490.33 00:18:15.525 [2024-10-08T09:24:07.208Z] =================================================================================================================== 00:18:15.525 [2024-10-08T09:24:07.208Z] Total : 20336.89 79.44 0.00 0.00 6279.22 3872.58 15490.33 00:18:15.525 { 00:18:15.525 "results": [ 00:18:15.525 { 00:18:15.525 "job": "nvme0n1", 00:18:15.525 "core_mask": "0x2", 00:18:15.525 "workload": "randwrite", 00:18:15.525 "status": "finished", 00:18:15.525 "queue_depth": 128, 00:18:15.525 "io_size": 4096, 00:18:15.525 "runtime": 2.007436, 00:18:15.525 "iops": 20336.887452451785, 00:18:15.525 "mibps": 79.44096661113979, 00:18:15.525 "io_failed": 0, 00:18:15.525 "io_timeout": 0, 00:18:15.525 "avg_latency_us": 6279.224413204921, 00:18:15.525 "min_latency_us": 3872.581818181818, 00:18:15.525 "max_latency_us": 15490.327272727272 00:18:15.525 } 00:18:15.525 ], 00:18:15.525 "core_count": 1 00:18:15.525 } 00:18:15.525 09:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:15.525 09:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:15.525 09:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:15.525 09:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:15.525 09:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:15.525 | select(.opcode=="crc32c") 00:18:15.525 | "\(.module_name) \(.executed)"' 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80421 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80421 ']' 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80421 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80421 00:18:15.525 killing process with pid 80421 00:18:15.525 Received shutdown signal, test time was about 2.000000 seconds 00:18:15.525 00:18:15.525 Latency(us) 00:18:15.525 [2024-10-08T09:24:07.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:15.525 [2024-10-08T09:24:07.208Z] =================================================================================================================== 00:18:15.525 [2024-10-08T09:24:07.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80421' 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80421 00:18:15.525 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80421 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80487 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80487 /var/tmp/bperf.sock 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80487 ']' 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:15.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.785 09:24:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:15.785 [2024-10-08 09:24:07.450905] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:18:15.785 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:15.785 Zero copy mechanism will not be used. 
00:18:15.785 [2024-10-08 09:24:07.451015] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80487 ] 00:18:16.044 [2024-10-08 09:24:07.590255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.044 [2024-10-08 09:24:07.691137] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.977 09:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.977 09:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:16.977 09:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:16.977 09:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:16.977 09:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:17.235 [2024-10-08 09:24:08.832492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.235 09:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.235 09:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.801 nvme0n1 00:18:17.801 09:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:17.801 09:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:17.801 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:17.801 Zero copy mechanism will not be used. 00:18:17.801 Running I/O for 2 seconds... 
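At this point all four nvmf_digest_clean workloads have been launched; host/digest.sh@128 through @131 walk a 2x2 matrix of direction and I/O size against the same attach/measure/verify recipe. A compressed view of that matrix is below; the parameter values are copied from the run_bperf calls in the trace, but the loop itself is illustrative, since the script spells out the four calls explicitly.

    # rw, block size, queue depth; scan_dsa stays false in every clean-digest run
    for cfg in "randread 4096 128" "randread 131072 16" \
               "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $cfg false   # the 131072-byte runs also trip the 65536-byte zero-copy threshold notice
    done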
00:18:19.673 5857.00 IOPS, 732.12 MiB/s [2024-10-08T09:24:11.356Z] 5878.50 IOPS, 734.81 MiB/s 00:18:19.673 Latency(us) 00:18:19.673 [2024-10-08T09:24:11.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.673 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:19.673 nvme0n1 : 2.00 5875.88 734.49 0.00 0.00 2717.47 2174.60 7417.48 00:18:19.673 [2024-10-08T09:24:11.356Z] =================================================================================================================== 00:18:19.673 [2024-10-08T09:24:11.356Z] Total : 5875.88 734.49 0.00 0.00 2717.47 2174.60 7417.48 00:18:19.673 { 00:18:19.673 "results": [ 00:18:19.673 { 00:18:19.673 "job": "nvme0n1", 00:18:19.673 "core_mask": "0x2", 00:18:19.673 "workload": "randwrite", 00:18:19.673 "status": "finished", 00:18:19.673 "queue_depth": 16, 00:18:19.673 "io_size": 131072, 00:18:19.673 "runtime": 2.003273, 00:18:19.673 "iops": 5875.884115644747, 00:18:19.673 "mibps": 734.4855144555934, 00:18:19.673 "io_failed": 0, 00:18:19.673 "io_timeout": 0, 00:18:19.673 "avg_latency_us": 2717.465593870915, 00:18:19.673 "min_latency_us": 2174.6036363636363, 00:18:19.673 "max_latency_us": 7417.483636363636 00:18:19.673 } 00:18:19.673 ], 00:18:19.673 "core_count": 1 00:18:19.673 } 00:18:19.673 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:19.673 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:19.673 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:19.673 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:19.673 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:19.673 | select(.opcode=="crc32c") 00:18:19.673 | "\(.module_name) \(.executed)"' 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80487 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80487 ']' 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80487 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80487 00:18:19.932 killing process with pid 80487 00:18:19.932 Received shutdown signal, test time was about 2.000000 seconds 00:18:19.932 00:18:19.932 Latency(us) 00:18:19.932 [2024-10-08T09:24:11.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:19.932 [2024-10-08T09:24:11.615Z] =================================================================================================================== 00:18:19.932 [2024-10-08T09:24:11.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80487' 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80487 00:18:19.932 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80487 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80262 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80262 ']' 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80262 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80262 00:18:20.191 killing process with pid 80262 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80262' 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80262 00:18:20.191 09:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80262 00:18:20.758 00:18:20.758 real 0m19.637s 00:18:20.758 user 0m37.447s 00:18:20.758 sys 0m5.798s 00:18:20.758 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:20.758 ************************************ 00:18:20.758 END TEST nvmf_digest_clean 00:18:20.758 ************************************ 00:18:20.758 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:20.758 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:20.759 ************************************ 00:18:20.759 START TEST nvmf_digest_error 00:18:20.759 ************************************ 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:18:20.759 09:24:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=80571 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 80571 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80571 ']' 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.759 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:20.759 [2024-10-08 09:24:12.275243] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:18:20.759 [2024-10-08 09:24:12.275348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.759 [2024-10-08 09:24:12.404868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.018 [2024-10-08 09:24:12.490906] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.018 [2024-10-08 09:24:12.490989] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.018 [2024-10-08 09:24:12.491009] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.018 [2024-10-08 09:24:12.491017] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.018 [2024-10-08 09:24:12.491024] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:21.018 [2024-10-08 09:24:12.491429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:21.018 [2024-10-08 09:24:12.579892] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.018 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:21.018 [2024-10-08 09:24:12.665302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.280 null0 00:18:21.280 [2024-10-08 09:24:12.728602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.280 [2024-10-08 09:24:12.752796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80595 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80595 /var/tmp/bperf.sock 00:18:21.280 09:24:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80595 ']' 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:21.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:21.280 09:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:21.280 [2024-10-08 09:24:12.822949] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:18:21.281 [2024-10-08 09:24:12.823089] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80595 ] 00:18:21.540 [2024-10-08 09:24:12.963679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.540 [2024-10-08 09:24:13.063181] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.540 [2024-10-08 09:24:13.116270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:22.476 09:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:22.477 09:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:22.477 09:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:22.477 09:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:22.477 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:22.477 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.477 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:22.477 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.477 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:22.477 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:23.045 nvme0n1 00:18:23.045 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:23.045 09:24:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.045 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:23.045 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.045 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:23.045 09:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:23.045 Running I/O for 2 seconds... 00:18:23.045 [2024-10-08 09:24:14.650432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.045 [2024-10-08 09:24:14.650504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.045 [2024-10-08 09:24:14.650520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.045 [2024-10-08 09:24:14.664858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.045 [2024-10-08 09:24:14.664911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.045 [2024-10-08 09:24:14.664941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.045 [2024-10-08 09:24:14.679115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.045 [2024-10-08 09:24:14.679174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.045 [2024-10-08 09:24:14.679204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.045 [2024-10-08 09:24:14.693292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.045 [2024-10-08 09:24:14.693344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.045 [2024-10-08 09:24:14.693373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.045 [2024-10-08 09:24:14.707624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.045 [2024-10-08 09:24:14.707678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.045 [2024-10-08 09:24:14.707707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.045 [2024-10-08 09:24:14.721757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.045 [2024-10-08 09:24:14.721808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5998 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.045 [2024-10-08 09:24:14.721837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.736474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.736525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.736554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.750571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.750642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.750671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.764833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.764898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.764928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.778995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.779046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.779075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.793007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.793058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.793087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.807162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.807227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.807256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.821279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.821357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.821387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.835402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.835453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.835481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.849375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.849427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.849456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.863603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.863654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.863683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.877842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.877892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.877921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.893082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.893134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.893167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.908713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.908783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.908813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.923419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.923492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.923521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.937717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.937781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.937811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.951983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.952037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.952049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.966033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.966083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.966112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-08 09:24:14.980151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.305 [2024-10-08 09:24:14.980202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-08 09:24:14.980231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:14.994925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:14.994975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:14.995004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.009606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.009658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.009687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.023730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.023804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.023817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.037706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.037765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.037794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.051781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.051831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.051843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.065823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.065873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.065901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.079839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.079894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.079906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.093792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.093842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.093870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.107858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.107911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.107923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.121898] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.121948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.121976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.135961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.136036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.136049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.150051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.150100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.150129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.164060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.164113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.164125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.178093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.178144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.565 [2024-10-08 09:24:15.178172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.565 [2024-10-08 09:24:15.192181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.565 [2024-10-08 09:24:15.192230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.566 [2024-10-08 09:24:15.192258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.566 [2024-10-08 09:24:15.206256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.566 [2024-10-08 09:24:15.206348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.566 [2024-10-08 09:24:15.206362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:23.566 [2024-10-08 09:24:15.220329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.566 [2024-10-08 09:24:15.220379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.566 [2024-10-08 09:24:15.220408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.566 [2024-10-08 09:24:15.234431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.566 [2024-10-08 09:24:15.234485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.566 [2024-10-08 09:24:15.234498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.825 [2024-10-08 09:24:15.249049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.825 [2024-10-08 09:24:15.249101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.825 [2024-10-08 09:24:15.249146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.825 [2024-10-08 09:24:15.263158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.825 [2024-10-08 09:24:15.263208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.825 [2024-10-08 09:24:15.263236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.825 [2024-10-08 09:24:15.277280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.825 [2024-10-08 09:24:15.277329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.825 [2024-10-08 09:24:15.277357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.825 [2024-10-08 09:24:15.291939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.825 [2024-10-08 09:24:15.291993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.825 [2024-10-08 09:24:15.292006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.825 [2024-10-08 09:24:15.307631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.825 [2024-10-08 09:24:15.307683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.825 [2024-10-08 09:24:15.307712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.825 [2024-10-08 09:24:15.322881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.825 [2024-10-08 09:24:15.322935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.825 [2024-10-08 09:24:15.322965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.825 [2024-10-08 09:24:15.337043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.825 [2024-10-08 09:24:15.337093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.825 [2024-10-08 09:24:15.337122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.825 [2024-10-08 09:24:15.351516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.825 [2024-10-08 09:24:15.351567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.351597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.365623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.365674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.365702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.379813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.379866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.379895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.393888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.393943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.393956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.407971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.408031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.408060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.421928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.421980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.421993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.435985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.436038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.436066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.450416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.450468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.450482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.464430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.464480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.464508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.478638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.478688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.478716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.492808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.492858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.492887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.826 [2024-10-08 09:24:15.507492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:23.826 [2024-10-08 09:24:15.507562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11645 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:23.826 [2024-10-08 09:24:15.507593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.085 [2024-10-08 09:24:15.521972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.085 [2024-10-08 09:24:15.522024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.085 [2024-10-08 09:24:15.522036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.085 [2024-10-08 09:24:15.536844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.085 [2024-10-08 09:24:15.536895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.085 [2024-10-08 09:24:15.536924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.085 [2024-10-08 09:24:15.558239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.085 [2024-10-08 09:24:15.558309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.085 [2024-10-08 09:24:15.558325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.085 [2024-10-08 09:24:15.573528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.085 [2024-10-08 09:24:15.573579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.085 [2024-10-08 09:24:15.573607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.085 [2024-10-08 09:24:15.590136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.085 [2024-10-08 09:24:15.590188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.085 [2024-10-08 09:24:15.590217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.085 [2024-10-08 09:24:15.607337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.085 [2024-10-08 09:24:15.607403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.085 [2024-10-08 09:24:15.607432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.085 17332.00 IOPS, 67.70 MiB/s [2024-10-08T09:24:15.769Z] [2024-10-08 09:24:15.626630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.086 [2024-10-08 09:24:15.626682] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.086 [2024-10-08 09:24:15.626726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.086 [2024-10-08 09:24:15.642809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.086 [2024-10-08 09:24:15.642883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.086 [2024-10-08 09:24:15.642911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.086 [2024-10-08 09:24:15.659676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.086 [2024-10-08 09:24:15.659729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.086 [2024-10-08 09:24:15.659777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.086 [2024-10-08 09:24:15.676675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.086 [2024-10-08 09:24:15.676728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.086 [2024-10-08 09:24:15.676778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.086 [2024-10-08 09:24:15.693323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.086 [2024-10-08 09:24:15.693373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.086 [2024-10-08 09:24:15.693401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.086 [2024-10-08 09:24:15.709509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.086 [2024-10-08 09:24:15.709560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.086 [2024-10-08 09:24:15.709613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.086 [2024-10-08 09:24:15.726118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.086 [2024-10-08 09:24:15.726169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.086 [2024-10-08 09:24:15.726198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.086 [2024-10-08 09:24:15.741830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1da9a80) 00:18:24.086 [2024-10-08 09:24:15.741863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.086 [2024-10-08 09:24:15.741890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.086 [2024-10-08 09:24:15.757400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.086 [2024-10-08 09:24:15.757442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.086 [2024-10-08 09:24:15.757469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.345 [2024-10-08 09:24:15.773686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.345 [2024-10-08 09:24:15.773760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.345 [2024-10-08 09:24:15.773774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.345 [2024-10-08 09:24:15.789580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.345 [2024-10-08 09:24:15.789639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.345 [2024-10-08 09:24:15.789667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.345 [2024-10-08 09:24:15.805214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.345 [2024-10-08 09:24:15.805266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.345 [2024-10-08 09:24:15.805294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.345 [2024-10-08 09:24:15.820653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.345 [2024-10-08 09:24:15.820703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.345 [2024-10-08 09:24:15.820731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.345 [2024-10-08 09:24:15.836290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.345 [2024-10-08 09:24:15.836324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.345 [2024-10-08 09:24:15.836352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.345 [2024-10-08 09:24:15.851338] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.345 [2024-10-08 09:24:15.851388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.345 [2024-10-08 09:24:15.851416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.345 [2024-10-08 09:24:15.866385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.345 [2024-10-08 09:24:15.866436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.345 [2024-10-08 09:24:15.866448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.345 [2024-10-08 09:24:15.881473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.345 [2024-10-08 09:24:15.881506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.345 [2024-10-08 09:24:15.881533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.345 [2024-10-08 09:24:15.896631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.345 [2024-10-08 09:24:15.896681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.345 [2024-10-08 09:24:15.896710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.346 [2024-10-08 09:24:15.912554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.346 [2024-10-08 09:24:15.912617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.346 [2024-10-08 09:24:15.912644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.346 [2024-10-08 09:24:15.928850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.346 [2024-10-08 09:24:15.928916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.346 [2024-10-08 09:24:15.928969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.346 [2024-10-08 09:24:15.944941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.346 [2024-10-08 09:24:15.945001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.346 [2024-10-08 09:24:15.945029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:24.346 [2024-10-08 09:24:15.960751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.346 [2024-10-08 09:24:15.960801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.346 [2024-10-08 09:24:15.960830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.346 [2024-10-08 09:24:15.977058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.346 [2024-10-08 09:24:15.977091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.346 [2024-10-08 09:24:15.977120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.346 [2024-10-08 09:24:15.991438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.346 [2024-10-08 09:24:15.991475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.346 [2024-10-08 09:24:15.991504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.346 [2024-10-08 09:24:16.006812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.346 [2024-10-08 09:24:16.006865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.346 [2024-10-08 09:24:16.006878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.346 [2024-10-08 09:24:16.020926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.346 [2024-10-08 09:24:16.020993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.346 [2024-10-08 09:24:16.021022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.038095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.038145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.038173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.055932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.055996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.056025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.072553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.072620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.072649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.090886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.090950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.090961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.107687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.107775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.107791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.123374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.123408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.123436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.138012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.138045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.138072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.152282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.152316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.152343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.166712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.166784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.166812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.180701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.180759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.180773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.194648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.194698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.194722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.208593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.208627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.208654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.222526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.222578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.222590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.236568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.605 [2024-10-08 09:24:16.236602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.605 [2024-10-08 09:24:16.236629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.605 [2024-10-08 09:24:16.250500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.606 [2024-10-08 09:24:16.250552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.606 [2024-10-08 09:24:16.250564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.606 [2024-10-08 09:24:16.264536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.606 [2024-10-08 09:24:16.264569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:24.606 [2024-10-08 09:24:16.264596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.606 [2024-10-08 09:24:16.278389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.606 [2024-10-08 09:24:16.278440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.606 [2024-10-08 09:24:16.278452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.293320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.293353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.293381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.307428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.307461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.307488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.322206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.322256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.322311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.337681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.337715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.337743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.355896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.355978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.356006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.373866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.373899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:19243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.373926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.390556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.390623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.390636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.406406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.406458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.406480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.422705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.422778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.422807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.437928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.437987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.438015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.453332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.453365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.453392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.469119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.469183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.469211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.484583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.484619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.484646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.499618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.499667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.499695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.514087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.514136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.514162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.528525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.528557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.528584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.865 [2024-10-08 09:24:16.543005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:24.865 [2024-10-08 09:24:16.543037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.865 [2024-10-08 09:24:16.543064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.125 [2024-10-08 09:24:16.566127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:25.125 [2024-10-08 09:24:16.566193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.125 [2024-10-08 09:24:16.566221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.125 [2024-10-08 09:24:16.581864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:25.125 [2024-10-08 09:24:16.581915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.125 [2024-10-08 09:24:16.581927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.125 [2024-10-08 09:24:16.597081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:25.125 
[2024-10-08 09:24:16.597115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.125 [2024-10-08 09:24:16.597142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.125 [2024-10-08 09:24:16.612393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:25.125 [2024-10-08 09:24:16.612426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.125 [2024-10-08 09:24:16.612453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.125 16825.50 IOPS, 65.72 MiB/s [2024-10-08T09:24:16.808Z] [2024-10-08 09:24:16.628050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1da9a80) 00:18:25.125 [2024-10-08 09:24:16.628083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.125 [2024-10-08 09:24:16.628111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.125 00:18:25.125 Latency(us) 00:18:25.125 [2024-10-08T09:24:16.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.125 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:25.125 nvme0n1 : 2.01 16844.13 65.80 0.00 0.00 7592.88 6702.55 29193.31 00:18:25.125 [2024-10-08T09:24:16.808Z] =================================================================================================================== 00:18:25.125 [2024-10-08T09:24:16.808Z] Total : 16844.13 65.80 0.00 0.00 7592.88 6702.55 29193.31 00:18:25.125 { 00:18:25.125 "results": [ 00:18:25.125 { 00:18:25.125 "job": "nvme0n1", 00:18:25.125 "core_mask": "0x2", 00:18:25.125 "workload": "randread", 00:18:25.125 "status": "finished", 00:18:25.125 "queue_depth": 128, 00:18:25.125 "io_size": 4096, 00:18:25.125 "runtime": 2.005387, 00:18:25.125 "iops": 16844.13033494283, 00:18:25.125 "mibps": 65.79738412087043, 00:18:25.125 "io_failed": 0, 00:18:25.125 "io_timeout": 0, 00:18:25.125 "avg_latency_us": 7592.88202234309, 00:18:25.125 "min_latency_us": 6702.545454545455, 00:18:25.125 "max_latency_us": 29193.30909090909 00:18:25.125 } 00:18:25.125 ], 00:18:25.125 "core_count": 1 00:18:25.125 } 00:18:25.125 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:25.125 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:25.125 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:25.125 | .driver_specific 00:18:25.125 | .nvme_error 00:18:25.125 | .status_code 00:18:25.125 | .command_transient_transport_error' 00:18:25.125 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 )) 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@73 -- # killprocess 80595 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80595 ']' 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80595 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80595 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:25.384 killing process with pid 80595 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80595' 00:18:25.384 Received shutdown signal, test time was about 2.000000 seconds 00:18:25.384 00:18:25.384 Latency(us) 00:18:25.384 [2024-10-08T09:24:17.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.384 [2024-10-08T09:24:17.067Z] =================================================================================================================== 00:18:25.384 [2024-10-08T09:24:17.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80595 00:18:25.384 09:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80595 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80655 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80655 /var/tmp/bperf.sock 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80655 ']' 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
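For reference, the digest-error run being started here reduces to a short RPC-driven sequence: launch bdevperf on its own RPC socket, enable per-command NVMe error counters, attach the controller with data digest checking (--ddgst), have the accel layer corrupt crc32c results, run the random-read workload, and finally read the transient-transport-error count back from bdev_get_iostat. The following is a minimal sketch reconstructed from the trace, not the test suite itself; the SPDK path, bdev name nvme0n1, target address 10.0.0.3:4420 and the RPC names are taken from the log, while the polling wait loop (standing in for the suite's waitforlisten helper) and the choice of which app receives the accel_error_inject_error calls are assumptions.

#!/usr/bin/env bash
# Minimal sketch of the digest-error flow, assuming an NVMe-oF/TCP target is
# already listening on 10.0.0.3:4420 (nqn.2016-06.io.spdk:cnode1) as in the log.
set -euo pipefail

SPDK=${SPDK:-/home/vagrant/spdk_repo/spdk}
SOCK=/var/tmp/bperf.sock

# Start bdevperf with the parameters seen in the trace: core mask 0x2,
# 128 KiB random reads, queue depth 16, 2 s runtime, wait for RPC start (-z).
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
BPERF_PID=$!

# Stand-in for the suite's waitforlisten helper: poll until the RPC socket answers.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

bperf_rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }
tgt_rpc()   { "$SPDK/scripts/rpc.py" "$@"; }  # default socket; assumed to reach the nvmf target app

# Keep per-command NVMe error counters and retry failed I/O indefinitely.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled while injection is disabled, then corrupt
# 32 crc32c operations so reads complete with transient transport errors.
tgt_rpc accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then read back the transient transport error counter that
# host/digest.sh compares against 0 (as in the host/digest.sh@71 check earlier in the trace).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r \
    '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

kill "$BPERF_PID"

In the trace that follows, these same calls appear interleaved with bdevperf's own output, which is why the data-digest ERROR/NOTICE pairs dominate the log once the corrupted crc32c results start arriving.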
00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.643 09:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:25.643 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:25.643 Zero copy mechanism will not be used. 00:18:25.643 [2024-10-08 09:24:17.240715] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:18:25.643 [2024-10-08 09:24:17.240853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80655 ] 00:18:25.902 [2024-10-08 09:24:17.375215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.902 [2024-10-08 09:24:17.457398] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.902 [2024-10-08 09:24:17.511914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:26.839 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:27.408 nvme0n1 00:18:27.408 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:27.408 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.408 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:27.408 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.408 09:24:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:27.408 09:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:27.408 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:27.408 Zero copy mechanism will not be used. 00:18:27.408 Running I/O for 2 seconds... 00:18:27.408 [2024-10-08 09:24:18.944502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.944594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.944610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.949156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.949209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.949238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.953647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.953701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.953729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.958146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.958199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.958227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.962557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.962623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.962648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.967062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.967114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.967127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:27.408 [2024-10-08 09:24:18.971699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.971777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.971792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.976497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.976548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.976577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.981364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.981418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.981446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.986213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.986267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.986333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.991499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.991551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.991607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:18.996646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:18.996700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:18.996729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:19.001539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:19.001590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:19.001620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:19.006192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:19.006244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:19.006296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:19.011035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:19.011086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:19.011115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:19.015726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:19.015789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.408 [2024-10-08 09:24:19.015817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.408 [2024-10-08 09:24:19.020395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.408 [2024-10-08 09:24:19.020446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.020475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.025134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.025187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.025216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.029531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.029583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.029611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.034097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.034163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.034187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.038630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.038683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.038726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.043056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.043106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.043135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.047477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.047529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.047557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.051969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.052021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.052050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.056301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.056353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.056381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.060794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.060844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.060872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.065204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.065255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.409 [2024-10-08 09:24:19.065284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.069589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.069641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.069670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.073998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.074051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.074063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.078426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.078479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.078492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.082922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.082972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.083001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.409 [2024-10-08 09:24:19.087545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.409 [2024-10-08 09:24:19.087598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.409 [2024-10-08 09:24:19.087643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.092279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.092331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.092360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.097146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.097196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.097225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.101509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.101560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.101589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.105969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.106023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.106036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.110305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.110375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.110388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.114823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.114873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.114901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.119294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.119345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.119372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.123783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.123837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.123864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.128361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.128414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.128443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.133130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.133182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.133210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.137651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.137704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.137733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.142440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.142497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.142511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.147319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.147372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.147401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.152349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.152400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.152429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.157179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.157232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.157261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.161906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 
00:18:27.670 [2024-10-08 09:24:19.161958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.161987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.166388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.166443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.166456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.171091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.171163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.171176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.175672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.175725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.175785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.180176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.180228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.180257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.184654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.670 [2024-10-08 09:24:19.184706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.670 [2024-10-08 09:24:19.184734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.670 [2024-10-08 09:24:19.189512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.189549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.189577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.194206] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.194258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.194327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.198900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.198952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.198981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.203561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.203613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.203637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.208038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.208093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.208106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.212463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.212515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.212543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.217039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.217090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.217119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.221696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.221788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.221802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.226212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.226265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.226317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.230938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.230989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.231017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.235628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.235683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.235697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.240407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.240461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.240491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.245038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.245091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.245119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.249818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.249869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.249881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.254742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.254807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.254846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.259685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.259763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.259777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.264638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.264691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.264720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.269611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.269663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.269693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.274243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.274348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.274362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.278962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.279011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.279040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.283836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.283878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.283891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.288319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.288372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.288400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.292897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.292949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.292985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.297597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.297651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.297680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.302241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.302322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.302337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.306807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.306857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.306886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.311358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.311410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.311438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.316272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.316340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.671 [2024-10-08 09:24:19.316370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.671 [2024-10-08 09:24:19.320891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.671 [2024-10-08 09:24:19.320944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.671 [2024-10-08 09:24:19.320956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.672 [2024-10-08 09:24:19.325412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.672 [2024-10-08 09:24:19.325462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.672 [2024-10-08 09:24:19.325490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.672 [2024-10-08 09:24:19.329972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.672 [2024-10-08 09:24:19.330024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.672 [2024-10-08 09:24:19.330053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.672 [2024-10-08 09:24:19.334649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.672 [2024-10-08 09:24:19.334702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.672 [2024-10-08 09:24:19.334730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.672 [2024-10-08 09:24:19.339218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.672 [2024-10-08 09:24:19.339269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.672 [2024-10-08 09:24:19.339297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.672 [2024-10-08 09:24:19.343716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.672 [2024-10-08 09:24:19.343776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.672 [2024-10-08 09:24:19.343805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.672 [2024-10-08 09:24:19.348549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.672 [2024-10-08 09:24:19.348600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.672 [2024-10-08 09:24:19.348628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.353235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.353303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.353332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.358098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.358162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.358188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.362681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.362771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.362800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.367520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.367571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.367599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.372023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.372074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.372102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.376495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.376546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.376574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.381373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.381425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.381454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.386093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.386161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.386189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.391027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.391080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.391125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.395866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.395901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.395930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.400941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.400979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.401009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.405691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.405785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.405800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.410674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.410726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.410777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.415728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.415822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.415835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.420409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 
00:18:27.942 [2024-10-08 09:24:19.420460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.420488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.425106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.425158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.425186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.429556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.429623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.429651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.434143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.434194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.434222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.942 [2024-10-08 09:24:19.438619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.942 [2024-10-08 09:24:19.438671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.942 [2024-10-08 09:24:19.438683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.443126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.443179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.443209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.447594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.447645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.447674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.451965] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.452018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.452030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.456296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.456347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.456375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.461096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.461146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.461174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.465732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.465795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.465824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.470342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.470394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.470407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.474846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.474896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.474925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.479384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.479435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.479463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.483803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.483850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.483862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.488110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.488177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.488205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.492564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.492615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.492644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.497081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.497131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.497160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.501421] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.501473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.501501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.505872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.505921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.505950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.510232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.510306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.510319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.514711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.514786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.514815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.519065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.519114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.519143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.523387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.523438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.523466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.527802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.527851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.527863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.532168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.532218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.532246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.536569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.536619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.536648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.541070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.541123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.541152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.545437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.545488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.545518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.549890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.549943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.549971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.554344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.554397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.554409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.559229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.559301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.559346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.563971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.943 [2024-10-08 09:24:19.564029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.943 [2024-10-08 09:24:19.564044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.943 [2024-10-08 09:24:19.568555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.568611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.568641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.573416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.573457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.944 [2024-10-08 09:24:19.573472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.578099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.578153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.578183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.582687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.582783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.582797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.587339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.587392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.587421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.591844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.591897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.591909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.596305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.596357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.596386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.600888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.600939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.600967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.605724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.605786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.605816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.610383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.610422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.610435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.944 [2024-10-08 09:24:19.615108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:27.944 [2024-10-08 09:24:19.615163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.944 [2024-10-08 09:24:19.615192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.619708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.619789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.619804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.624403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.624457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.624486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.629042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.629093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.629122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.633575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.633626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.633654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.637974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.638025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.638053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.642455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.642495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.642509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.647040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.647090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.647119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.651389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.651440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.651469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.655864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.655916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.655928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.660255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.660306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.660334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.664796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.664846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.664858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.669269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 
00:18:28.219 [2024-10-08 09:24:19.669304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.669332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.673679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.673730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.673771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.677970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.678004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.678032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.682411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.682465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.682479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.686866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.686915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.686943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.219 [2024-10-08 09:24:19.691318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.219 [2024-10-08 09:24:19.691369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.219 [2024-10-08 09:24:19.691397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.695726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.695784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.695813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.700099] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.700136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.700164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.704520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.704554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.704582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.708852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.708903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.708915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.713200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.713235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.713262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.717559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.717593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.717621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.721857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.721889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.721916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.726099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.726133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.726161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.730403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.730454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.730466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.734791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.734843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.734870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.739128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.739162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.739189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.743378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.743411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.743440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.747722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.747764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.747792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.751970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.752003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.752030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.756213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.756247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.756275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.760519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.760554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.760581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.764817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.764868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.764880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.769064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.769115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.769127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.773356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.773391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.773418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.777621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.777655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.777682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.781853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.781886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.781914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.786055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.786088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.786115] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.790361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.790397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.790409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.794755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.794814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.794842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.799174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.799208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.799235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.803644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.803677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.803705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.808197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.808231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.808258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.812545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.220 [2024-10-08 09:24:19.812580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.220 [2024-10-08 09:24:19.812607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.220 [2024-10-08 09:24:19.817009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.817060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.817072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.821222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.821256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.821283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.825726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.825792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.825837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.830422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.830477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.830490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.834963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.835014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.835042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.839425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.839476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.839505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.843888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.843936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.843964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.848192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.848241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.848269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.852488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.852521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.852548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.856832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.856865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.856893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.861140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.861174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.861201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.865449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.865483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.865511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.869752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.869798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.869810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.874039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.874090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.874101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.878356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.878392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.878404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.882646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.882709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.882737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.887003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.887035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.887063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.891255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.891289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.891316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.895539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.895573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.895600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.221 [2024-10-08 09:24:19.900053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.221 [2024-10-08 09:24:19.900087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.221 [2024-10-08 09:24:19.900114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.904488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.904522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.904550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.909226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 
00:18:28.481 [2024-10-08 09:24:19.909260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.909288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.913574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.913608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.913635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.917955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.918006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.918018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.922255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.922313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.922326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.926621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.926671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.926715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.931018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.931051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.931078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.935353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.935387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.935415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.481 6805.00 IOPS, 850.62 MiB/s [2024-10-08T09:24:20.164Z] [2024-10-08 09:24:19.941031] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.941067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.941095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.945362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.945397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.945424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.949713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.949776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.949789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.954056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.954090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.954117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.958414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.958465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.958477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.962756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.962800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.962827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.967168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.967202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.967230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.971656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.971690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.971718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.976102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.976135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.976163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.980501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.980534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.980562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.984972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.985024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.985036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.989340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.989374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.989402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.993701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.993762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.993777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:19.998354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:19.998419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:19.998433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:20.003098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:20.003149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:20.003177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:20.007986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:20.008058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:20.008087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:20.013145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:20.013199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:20.013228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:20.018544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:20.018599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:20.018615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:20.023779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:20.023861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:20.023906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:20.029021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:20.029083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:20.029112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:20.034593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.481 [2024-10-08 09:24:20.034634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.481 [2024-10-08 09:24:20.034658] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.481 [2024-10-08 09:24:20.040051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.040101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.040129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.044858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.044908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.044936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.049252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.049302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.049330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.053650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.053700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.053728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.058063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.058113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.058141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.062510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.062562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.062575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.066966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.067018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.067030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.071333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.071382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.071410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.075741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.075801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.075829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.080097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.080146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.080174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.084599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.084649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.084677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.089449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.089501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.089531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.093936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.093986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.094014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.098264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.098343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.098356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.102791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.102847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.102859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.107254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.107305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.107332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.111626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.111676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.111704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.116059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.116109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.116137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.120371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.120422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.120451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.124722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.124780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.124809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.129174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.129225] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.129252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.133593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.133643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.133671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.137921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.137970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.137998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.142297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.142365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.142377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.146620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.146670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.146695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.150993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.151045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.151057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.155366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.155415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.155443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.482 [2024-10-08 09:24:20.159998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f140e0) 00:18:28.482 [2024-10-08 09:24:20.160048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.482 [2024-10-08 09:24:20.160060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.164682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.164757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.164771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.169362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.169412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.169440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.173801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.173849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.173878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.178149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.178199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.178227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.182508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.182562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.182576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.186948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.186996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.187024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.191384] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.191435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.191463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.195804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.195860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.195872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.200198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.200248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.200276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.204690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.204766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.204780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.209076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.209126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.209154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.213480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.213529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.213557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.217857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.217906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.217934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.222198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.222249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.222300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.226645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.226711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.226739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.231013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.231062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.231090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.235466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.235501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.235533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.239817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.239864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.239875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.244303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.244352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.244380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.248960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.249011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.249039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.253303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.253353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.253382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.257648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.257698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.257725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.262000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.262050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.262077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.266412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.266450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.266464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.743 [2024-10-08 09:24:20.270993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.743 [2024-10-08 09:24:20.271043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.743 [2024-10-08 09:24:20.271071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.275386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.275435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.275463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.279711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.279789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.279802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.284121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.284186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.284214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.288539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.288589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.288617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.293141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.293190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.293218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.297616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.297666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.297694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.302023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.302073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.302100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.306368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.306406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.306419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.310840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.310889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:28.744 [2024-10-08 09:24:20.310916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.315219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.315270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.315298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.319594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.319645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.319673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.324219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.324270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.324298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.328867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.328918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.328947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.333478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.333529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.333559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.338364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.338404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.338418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.343126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.343193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.343220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.348032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.348084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.348127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.352838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.352889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.352917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.357372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.357423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.357451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.361891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.361943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.361955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.366511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.366566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.366579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.371038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.371091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.371119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.375427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.375478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.375507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.379992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.380042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.380070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.384358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.384408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.384437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.388807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.388856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.388884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.393333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.393384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.393412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.397771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.397819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.744 [2024-10-08 09:24:20.397831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.744 [2024-10-08 09:24:20.402200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.744 [2024-10-08 09:24:20.402268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.745 [2024-10-08 09:24:20.402322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.745 [2024-10-08 09:24:20.407004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 
00:18:28.745 [2024-10-08 09:24:20.407043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.745 [2024-10-08 09:24:20.407056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.745 [2024-10-08 09:24:20.411668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.745 [2024-10-08 09:24:20.411735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.745 [2024-10-08 09:24:20.411774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.745 [2024-10-08 09:24:20.416402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.745 [2024-10-08 09:24:20.416454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.745 [2024-10-08 09:24:20.416483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.745 [2024-10-08 09:24:20.421386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:28.745 [2024-10-08 09:24:20.421427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.745 [2024-10-08 09:24:20.421454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.426347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.426386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.426401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.431465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.431533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.431561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.436670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.436723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.436781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.441883] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.441934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.441969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.447024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.447077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.447121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.452110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.452161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.452173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.457030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.457084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.457096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.462020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.462081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.462112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.467392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.467431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.467444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.472357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.472394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.472423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.477152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.477206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.477234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.482129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.482182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.482195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.487167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.005 [2024-10-08 09:24:20.487219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.005 [2024-10-08 09:24:20.487248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.005 [2024-10-08 09:24:20.492030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.492082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.492111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.497034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.497090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.497113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.501929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.501980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.502009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.506603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.506685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.506713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.511218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.511254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.511283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.515859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.515909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.515938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.520311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.520363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.520391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.525076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.525128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.525141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.529618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.529670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.529698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.534094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.534146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.534174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.538838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.538888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.538916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.543354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.543405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.543434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.547982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.548050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.548078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.552953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.553025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.553038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.557587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.557637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.557665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.562264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.562345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.562358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.566798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.566858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.566886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.571399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.571449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.571476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.576118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.576173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.576201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.580665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.580716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.580745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.585168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.585219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.585247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.589751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.589814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.589844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.594253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.594312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.594325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.598798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.598858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.598886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.603298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.603348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.603376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.607760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.607810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.607838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.612180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.612232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.006 [2024-10-08 09:24:20.612260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.006 [2024-10-08 09:24:20.616604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.006 [2024-10-08 09:24:20.616655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.616683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.621066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.621118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.621131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.625518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.625568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.625596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.630003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.630055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.630084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.634458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.634497] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.634510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.638885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.638935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.638963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.643373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.643424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.643452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.647818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.647868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.647896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.652332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.652383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.652412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.657184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.657237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.657267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.661738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.661801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.661829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.666165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.666216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.666243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.670665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.670718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.670773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.675350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.675400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.675427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.679795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.679845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.679873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.007 [2024-10-08 09:24:20.684358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.007 [2024-10-08 09:24:20.684427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.007 [2024-10-08 09:24:20.684455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.267 [2024-10-08 09:24:20.689074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.267 [2024-10-08 09:24:20.689126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.267 [2024-10-08 09:24:20.689169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.267 [2024-10-08 09:24:20.693777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.267 [2024-10-08 09:24:20.693839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.267 [2024-10-08 09:24:20.693884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.267 [2024-10-08 09:24:20.698170] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.698221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.698249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.702684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.702761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.702775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.707253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.707305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.707333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.711901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.711951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.711980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.716403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.716455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.716483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.720917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.720970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.720983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.725367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.725418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.725447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.729845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.729895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.729923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.734451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.734506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.734520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.739048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.739099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.739128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.743583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.743650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.743679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.748424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.748474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.748502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.753681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.753719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.753773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.758945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.758996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.759024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.763935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.764002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.764030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.769405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.769477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.769508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.774768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.774818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.774842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.779989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.780087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.780126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.785364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.785414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.785441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.790736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.790810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.790823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.796141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.796177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.796205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.801406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.801457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.801486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.806553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.806634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.806661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.811573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.811661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.811691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.816670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.816720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.816757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.821660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.821712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.821741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.826591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.826656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.268 [2024-10-08 09:24:20.826691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.831399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.268 [2024-10-08 09:24:20.831450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:29.268 [2024-10-08 09:24:20.831478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.268 [2024-10-08 09:24:20.836376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.836427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.836455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.841521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.841573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.841602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.846718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.846805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.846818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.851874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.851926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.851954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.857086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.857135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.857164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.862467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.862521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.862535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.867704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.867768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.867798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.872704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.872766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.872796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.877873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.877925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.877955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.883052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.883104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.883116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.887993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.888061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.888089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.893013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.893093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.893121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.897886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.897936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.897992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.902703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.902782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.902796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.907380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.907431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.907459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.911974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.912041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.912070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.916656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.916725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.916755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.921250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.921303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.921332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.925830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.925880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.925908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.930245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.930320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.930334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.934776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 
00:18:29.269 [2024-10-08 09:24:20.934832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.934845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.269 [2024-10-08 09:24:20.939251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f140e0) 00:18:29.269 [2024-10-08 09:24:20.939302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.269 [2024-10-08 09:24:20.939330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.269 6727.00 IOPS, 840.88 MiB/s 00:18:29.269 Latency(us) 00:18:29.269 [2024-10-08T09:24:20.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.269 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:29.269 nvme0n1 : 2.00 6724.59 840.57 0.00 0.00 2376.38 2010.76 5749.29 00:18:29.269 [2024-10-08T09:24:20.952Z] =================================================================================================================== 00:18:29.269 [2024-10-08T09:24:20.952Z] Total : 6724.59 840.57 0.00 0.00 2376.38 2010.76 5749.29 00:18:29.269 { 00:18:29.269 "results": [ 00:18:29.269 { 00:18:29.269 "job": "nvme0n1", 00:18:29.269 "core_mask": "0x2", 00:18:29.269 "workload": "randread", 00:18:29.269 "status": "finished", 00:18:29.269 "queue_depth": 16, 00:18:29.269 "io_size": 131072, 00:18:29.269 "runtime": 2.003096, 00:18:29.269 "iops": 6724.590334162716, 00:18:29.269 "mibps": 840.5737917703395, 00:18:29.269 "io_failed": 0, 00:18:29.269 "io_timeout": 0, 00:18:29.269 "avg_latency_us": 2376.383261388945, 00:18:29.269 "min_latency_us": 2010.7636363636364, 00:18:29.269 "max_latency_us": 5749.294545454545 00:18:29.269 } 00:18:29.269 ], 00:18:29.269 "core_count": 1 00:18:29.269 } 00:18:29.528 09:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:29.529 09:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:29.529 09:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:29.529 09:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:29.529 | .driver_specific 00:18:29.529 | .nvme_error 00:18:29.529 | .status_code 00:18:29.529 | .command_transient_transport_error' 00:18:29.529 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 434 > 0 )) 00:18:29.529 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80655 00:18:29.529 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80655 ']' 00:18:29.529 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80655 00:18:29.788 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:29.788 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:18:29.788 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80655 00:18:29.788 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:29.788 killing process with pid 80655 00:18:29.788 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:29.788 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80655' 00:18:29.788 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80655 00:18:29.788 Received shutdown signal, test time was about 2.000000 seconds 00:18:29.788 00:18:29.788 Latency(us) 00:18:29.788 [2024-10-08T09:24:21.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.788 [2024-10-08T09:24:21.471Z] =================================================================================================================== 00:18:29.788 [2024-10-08T09:24:21.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.788 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80655 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80716 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80716 /var/tmp/bperf.sock 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80716 ']' 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.047 09:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:30.047 [2024-10-08 09:24:21.542872] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:18:30.047 [2024-10-08 09:24:21.542995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80716 ] 00:18:30.047 [2024-10-08 09:24:21.678318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.306 [2024-10-08 09:24:21.781171] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.306 [2024-10-08 09:24:21.838770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:31.243 09:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:31.810 nvme0n1 00:18:31.810 09:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:31.810 09:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.810 09:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.810 09:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.810 09:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:31.810 09:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:31.810 Running I/O for 2 seconds... 
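The randwrite pass launched above follows the same recipe as the randread pass whose summary appears earlier (the 840.57 MiB/s in that summary is just the 6724.59 IOPS at the 131072-byte I/O size: 6724.59 × 131072 / 2^20 ≈ 840.57): bdevperf is configured over /var/tmp/bperf.sock, a CRC32C corruption is injected so data digests mismatch, and the pass/fail check counts transient transport errors afterwards. Below is a minimal sketch of that sequence, assembled only from the rpc.py and bdevperf.py calls visible in this trace; the repo paths, the 10.0.0.3 target address, and the assumption that the rpc_cmd injection call goes to the target app's default RPC socket are specific to this run.

#!/usr/bin/env bash
# Sketch of one digest-error pass as driven by host/digest.sh in this log (assumptions noted above).
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bperf.sock   # bdevperf's RPC socket (bdevperf was started with -r /var/tmp/bperf.sock)

# Keep per-command NVMe error statistics and retry failed I/Os indefinitely inside bdevperf.
"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP target with data digest enabled, so every data PDU carries a CRC32C.
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th CRC32C computation; the trace issues this via rpc_cmd, assumed here
# to target the nvmf application's default RPC socket rather than bperf.sock.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the workload bdevperf was started with (-w randwrite -o 4096 -q 128 -t 2 in this run).
"$bperf_py" -s "$sock" perform_tests

# Pass/fail: at least one COMMAND TRANSIENT TRANSPORT ERROR must have been recorded
# for nvme0n1 (434 were counted after the randread pass above).
errs=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))

The data digest errors that follow are therefore expected: for writes the corrupted CRC32C shows up on the target side (tcp.c:data_crc32_calc_done), just as the randread pass above surfaced it on the host side (nvme_tcp.c:nvme_tcp_accel_seq_recv_compute_crc32_done).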
00:18:31.810 [2024-10-08 09:24:23.492921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:32.107 [2024-10-08 09:24:23.495917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.495963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.510457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198feb58 00:18:32.107 [2024-10-08 09:24:23.513182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.513221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.528080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fe2e8 00:18:32.107 [2024-10-08 09:24:23.531004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.531042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.546194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fda78 00:18:32.107 [2024-10-08 09:24:23.549046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.549084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.563988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fd208 00:18:32.107 [2024-10-08 09:24:23.566729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.566777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.581344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fc998 00:18:32.107 [2024-10-08 09:24:23.584129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.584168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.599014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fc128 00:18:32.107 [2024-10-08 09:24:23.601780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.601820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:18:32.107 [2024-10-08 09:24:23.616650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fb8b8 00:18:32.107 [2024-10-08 09:24:23.619289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.619329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.634290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fb048 00:18:32.107 [2024-10-08 09:24:23.637041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.637079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.652161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fa7d8 00:18:32.107 [2024-10-08 09:24:23.654768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.654806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.669474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f9f68 00:18:32.107 [2024-10-08 09:24:23.672160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.672199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.687044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f96f8 00:18:32.107 [2024-10-08 09:24:23.689674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.689712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.704720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f8e88 00:18:32.107 [2024-10-08 09:24:23.707378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.707416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.722330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f8618 00:18:32.107 [2024-10-08 09:24:23.724900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.724938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.740453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f7da8 00:18:32.107 [2024-10-08 09:24:23.743038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.107 [2024-10-08 09:24:23.743077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:32.107 [2024-10-08 09:24:23.758177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f7538 00:18:32.108 [2024-10-08 09:24:23.760631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.108 [2024-10-08 09:24:23.760668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:32.108 [2024-10-08 09:24:23.776783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f6cc8 00:18:32.108 [2024-10-08 09:24:23.779564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.108 [2024-10-08 09:24:23.779601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.795036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f6458 00:18:32.366 [2024-10-08 09:24:23.797567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.797605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.812808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f5be8 00:18:32.366 [2024-10-08 09:24:23.815201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.815239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.830444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f5378 00:18:32.366 [2024-10-08 09:24:23.832810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.832854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.848032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f4b08 00:18:32.366 [2024-10-08 09:24:23.850426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.850471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.865583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f4298 00:18:32.366 [2024-10-08 09:24:23.868034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.868071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.883300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f3a28 00:18:32.366 [2024-10-08 09:24:23.885608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.885644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.900799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f31b8 00:18:32.366 [2024-10-08 09:24:23.903128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.903166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.918430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f2948 00:18:32.366 [2024-10-08 09:24:23.920662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.920700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.936050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f20d8 00:18:32.366 [2024-10-08 09:24:23.938265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.938317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.953698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f1868 00:18:32.366 [2024-10-08 09:24:23.955907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.955955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.971169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f0ff8 00:18:32.366 [2024-10-08 09:24:23.973411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.973449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:23.988890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f0788 00:18:32.366 [2024-10-08 09:24:23.991122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.366 [2024-10-08 09:24:23.991160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:32.366 [2024-10-08 09:24:24.006494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198eff18 00:18:32.367 [2024-10-08 09:24:24.008800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.367 [2024-10-08 09:24:24.008836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:32.367 [2024-10-08 09:24:24.024130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ef6a8 00:18:32.367 [2024-10-08 09:24:24.026301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.367 [2024-10-08 09:24:24.026341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:32.367 [2024-10-08 09:24:24.041813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198eee38 00:18:32.367 [2024-10-08 09:24:24.044021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.367 [2024-10-08 09:24:24.044060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.059566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ee5c8 00:18:32.625 [2024-10-08 09:24:24.061763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.061809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.076993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198edd58 00:18:32.625 [2024-10-08 09:24:24.079276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.079315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.094584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ed4e8 00:18:32.625 [2024-10-08 09:24:24.096696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.096742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.111995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ecc78 00:18:32.625 [2024-10-08 09:24:24.114090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.114130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.129511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ec408 00:18:32.625 [2024-10-08 09:24:24.131571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.131608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.145975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ebb98 00:18:32.625 [2024-10-08 09:24:24.147894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.147943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.161008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198eb328 00:18:32.625 [2024-10-08 09:24:24.162598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.162678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.175629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198eaab8 00:18:32.625 [2024-10-08 09:24:24.177277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.177325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.190552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ea248 00:18:32.625 [2024-10-08 09:24:24.192380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.192428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.206962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e99d8 00:18:32.625 [2024-10-08 09:24:24.208872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 
09:24:24.208922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.223997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e9168 00:18:32.625 [2024-10-08 09:24:24.225866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.225917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.240874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e88f8 00:18:32.625 [2024-10-08 09:24:24.242508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.625 [2024-10-08 09:24:24.242544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:32.625 [2024-10-08 09:24:24.256025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e8088 00:18:32.625 [2024-10-08 09:24:24.257508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.626 [2024-10-08 09:24:24.257556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:32.626 [2024-10-08 09:24:24.270083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e7818 00:18:32.626 [2024-10-08 09:24:24.271496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.626 [2024-10-08 09:24:24.271543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:32.626 [2024-10-08 09:24:24.284028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e6fa8 00:18:32.626 [2024-10-08 09:24:24.285335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.626 [2024-10-08 09:24:24.285383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:32.626 [2024-10-08 09:24:24.297695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e6738 00:18:32.626 [2024-10-08 09:24:24.299080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.626 [2024-10-08 09:24:24.299130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:32.884 [2024-10-08 09:24:24.311559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e5ec8 00:18:32.884 [2024-10-08 09:24:24.312919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:32.884 [2024-10-08 09:24:24.312968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.884 [2024-10-08 09:24:24.325370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e5658 00:18:32.884 [2024-10-08 09:24:24.326776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.884 [2024-10-08 09:24:24.326849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:32.884 [2024-10-08 09:24:24.339366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e4de8 00:18:32.884 [2024-10-08 09:24:24.340719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.884 [2024-10-08 09:24:24.340794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:32.884 [2024-10-08 09:24:24.354070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e4578 00:18:32.884 [2024-10-08 09:24:24.355380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.355428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.367833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e3d08 00:18:32.885 [2024-10-08 09:24:24.369075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.369139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.381589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e3498 00:18:32.885 [2024-10-08 09:24:24.382875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.382925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.395270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e2c28 00:18:32.885 [2024-10-08 09:24:24.396496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.396543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.409018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e23b8 00:18:32.885 [2024-10-08 09:24:24.410229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13432 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.410299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.422750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e1b48 00:18:32.885 [2024-10-08 09:24:24.423908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.423955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.436405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e12d8 00:18:32.885 [2024-10-08 09:24:24.437538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.437588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.450165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e0a68 00:18:32.885 [2024-10-08 09:24:24.451360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.451408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.464256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e01f8 00:18:32.885 [2024-10-08 09:24:24.465354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.465402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:32.885 15308.00 IOPS, 59.80 MiB/s [2024-10-08T09:24:24.568Z] [2024-10-08 09:24:24.478030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198df988 00:18:32.885 [2024-10-08 09:24:24.479173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.479221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.491675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198df118 00:18:32.885 [2024-10-08 09:24:24.492823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.492879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.505940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198de8a8 00:18:32.885 [2024-10-08 09:24:24.507114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.507163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.520865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198de038 00:18:32.885 [2024-10-08 09:24:24.521966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.522018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.542139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198de038 00:18:32.885 [2024-10-08 09:24:24.544196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.544248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.885 [2024-10-08 09:24:24.555920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198de8a8 00:18:32.885 [2024-10-08 09:24:24.558030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.885 [2024-10-08 09:24:24.558077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.570684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198df118 00:18:33.144 [2024-10-08 09:24:24.573276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.573324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.586927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198df988 00:18:33.144 [2024-10-08 09:24:24.589592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.589648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.603596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e01f8 00:18:33.144 [2024-10-08 09:24:24.606052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.606126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.619990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e0a68 00:18:33.144 [2024-10-08 
09:24:24.622469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.622507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.635633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e12d8 00:18:33.144 [2024-10-08 09:24:24.637750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.637816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.649919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e1b48 00:18:33.144 [2024-10-08 09:24:24.651925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.651972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.663884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e23b8 00:18:33.144 [2024-10-08 09:24:24.665772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.665819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.677603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e2c28 00:18:33.144 [2024-10-08 09:24:24.679532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.679580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.691417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e3498 00:18:33.144 [2024-10-08 09:24:24.693318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.693366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.705100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e3d08 00:18:33.144 [2024-10-08 09:24:24.707004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.707051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.719032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e4578 
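Each failure in this stretch produces three console entries: data_crc32_calc_done reports the CRC32C (data digest) mismatch on the TCP qpair, followed by the WRITE command that hit it and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) status. For a quick tally of how many digest failures a captured run produced, a one-off grep along these lines works (illustrative only; LOG is a placeholder for whatever file this console output was saved to):

# count the data-digest failures recorded in a saved copy of this output (LOG is a placeholder path)
grep -c 'Data digest error on tqpair' "$LOG"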
00:18:33.144 [2024-10-08 09:24:24.720875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.144 [2024-10-08 09:24:24.720926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:33.144 [2024-10-08 09:24:24.732945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e4de8 00:18:33.144 [2024-10-08 09:24:24.734815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.145 [2024-10-08 09:24:24.734870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:33.145 [2024-10-08 09:24:24.746737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e5658 00:18:33.145 [2024-10-08 09:24:24.748524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.145 [2024-10-08 09:24:24.748570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:33.145 [2024-10-08 09:24:24.760456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e5ec8 00:18:33.145 [2024-10-08 09:24:24.762227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.145 [2024-10-08 09:24:24.762299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.145 [2024-10-08 09:24:24.774065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e6738 00:18:33.145 [2024-10-08 09:24:24.775935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.145 [2024-10-08 09:24:24.775982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:33.145 [2024-10-08 09:24:24.787816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e6fa8 00:18:33.145 [2024-10-08 09:24:24.789593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.145 [2024-10-08 09:24:24.789641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:33.145 [2024-10-08 09:24:24.801523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e7818 00:18:33.145 [2024-10-08 09:24:24.803304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.145 [2024-10-08 09:24:24.803352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:33.145 [2024-10-08 09:24:24.815253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with 
pdu=0x2000198e8088 00:18:33.145 [2024-10-08 09:24:24.817107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.145 [2024-10-08 09:24:24.817154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.829170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e88f8 00:18:33.404 [2024-10-08 09:24:24.830925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.830976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.842834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e9168 00:18:33.404 [2024-10-08 09:24:24.844570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.844619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.856575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198e99d8 00:18:33.404 [2024-10-08 09:24:24.858248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.858330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.870141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ea248 00:18:33.404 [2024-10-08 09:24:24.871877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.871926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.883879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198eaab8 00:18:33.404 [2024-10-08 09:24:24.885520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.885568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.897729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198eb328 00:18:33.404 [2024-10-08 09:24:24.899436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.899484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.911535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b29230) with pdu=0x2000198ebb98 00:18:33.404 [2024-10-08 09:24:24.913156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.913204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.925346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ec408 00:18:33.404 [2024-10-08 09:24:24.927006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.927054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.939038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ecc78 00:18:33.404 [2024-10-08 09:24:24.940593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.940658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.953247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ed4e8 00:18:33.404 [2024-10-08 09:24:24.954967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.955014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.967134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198edd58 00:18:33.404 [2024-10-08 09:24:24.968786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.968833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.981655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198ee5c8 00:18:33.404 [2024-10-08 09:24:24.983367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.983416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:24.996361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198eee38 00:18:33.404 [2024-10-08 09:24:24.998154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:24.998202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:25.012342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1b29230) with pdu=0x2000198ef6a8 00:18:33.404 [2024-10-08 09:24:25.014222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:25.014269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:25.028512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198eff18 00:18:33.404 [2024-10-08 09:24:25.030354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:25.030405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:25.045411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f0788 00:18:33.404 [2024-10-08 09:24:25.047210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:25.047255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:25.060456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f0ff8 00:18:33.404 [2024-10-08 09:24:25.062165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:25.062212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:33.404 [2024-10-08 09:24:25.074863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f1868 00:18:33.404 [2024-10-08 09:24:25.076368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.404 [2024-10-08 09:24:25.076414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:33.666 [2024-10-08 09:24:25.089036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f20d8 00:18:33.666 [2024-10-08 09:24:25.090581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.666 [2024-10-08 09:24:25.090667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:33.666 [2024-10-08 09:24:25.103152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f2948 00:18:33.666 [2024-10-08 09:24:25.104626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.666 [2024-10-08 09:24:25.104673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:33.666 [2024-10-08 09:24:25.117743] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f31b8 00:18:33.666 [2024-10-08 09:24:25.119383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.666 [2024-10-08 09:24:25.119431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:33.666 [2024-10-08 09:24:25.134529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f3a28 00:18:33.666 [2024-10-08 09:24:25.136235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.666 [2024-10-08 09:24:25.136286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:33.666 [2024-10-08 09:24:25.150599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f4298 00:18:33.666 [2024-10-08 09:24:25.152358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.666 [2024-10-08 09:24:25.152405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:33.666 [2024-10-08 09:24:25.165867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f4b08 00:18:33.666 [2024-10-08 09:24:25.167339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.666 [2024-10-08 09:24:25.167389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.180175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f5378 00:18:33.667 [2024-10-08 09:24:25.181569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.181616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.194258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f5be8 00:18:33.667 [2024-10-08 09:24:25.195720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.195772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.208580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f6458 00:18:33.667 [2024-10-08 09:24:25.210005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.210054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:33.667 
[2024-10-08 09:24:25.222817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f6cc8 00:18:33.667 [2024-10-08 09:24:25.224265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.224313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.237920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f7538 00:18:33.667 [2024-10-08 09:24:25.239531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.239580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.253805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f7da8 00:18:33.667 [2024-10-08 09:24:25.255309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.255358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.268397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f8618 00:18:33.667 [2024-10-08 09:24:25.269944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.270002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.282857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f8e88 00:18:33.667 [2024-10-08 09:24:25.284190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.284228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.297136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f96f8 00:18:33.667 [2024-10-08 09:24:25.298449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.298499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.311847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198f9f68 00:18:33.667 [2024-10-08 09:24:25.313176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.313225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:18:33.667 [2024-10-08 09:24:25.326772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fa7d8 00:18:33.667 [2024-10-08 09:24:25.328004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.328053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:33.667 [2024-10-08 09:24:25.341262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fb048 00:18:33.667 [2024-10-08 09:24:25.342650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.667 [2024-10-08 09:24:25.342699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:33.930 [2024-10-08 09:24:25.356756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fb8b8 00:18:33.930 [2024-10-08 09:24:25.357995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.930 [2024-10-08 09:24:25.358053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:33.930 [2024-10-08 09:24:25.371990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fc128 00:18:33.930 [2024-10-08 09:24:25.373218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.930 [2024-10-08 09:24:25.373269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:33.930 [2024-10-08 09:24:25.387972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fc998 00:18:33.930 [2024-10-08 09:24:25.389382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.930 [2024-10-08 09:24:25.389432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:33.930 [2024-10-08 09:24:25.404477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fd208 00:18:33.930 [2024-10-08 09:24:25.405827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.930 [2024-10-08 09:24:25.405903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:33.930 [2024-10-08 09:24:25.420917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fda78 00:18:33.930 [2024-10-08 09:24:25.422544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.930 [2024-10-08 09:24:25.422595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:33.930 [2024-10-08 09:24:25.437332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fe2e8 00:18:33.930 [2024-10-08 09:24:25.438667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.930 [2024-10-08 09:24:25.438729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:33.930 [2024-10-08 09:24:25.452393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198feb58 00:18:33.930 [2024-10-08 09:24:25.453589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.930 [2024-10-08 09:24:25.453636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:33.930 16320.00 IOPS, 63.75 MiB/s [2024-10-08T09:24:25.613Z] [2024-10-08 09:24:25.473958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:33.930 [2024-10-08 09:24:25.476170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.930 [2024-10-08 09:24:25.476220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.930 00:18:33.930 Latency(us) 00:18:33.930 [2024-10-08T09:24:25.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.930 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.930 nvme0n1 : 2.01 16317.01 63.74 0.00 0.00 7837.95 2606.55 33840.41 00:18:33.930 [2024-10-08T09:24:25.613Z] =================================================================================================================== 00:18:33.930 [2024-10-08T09:24:25.613Z] Total : 16317.01 63.74 0.00 0.00 7837.95 2606.55 33840.41 00:18:33.930 { 00:18:33.930 "results": [ 00:18:33.930 { 00:18:33.930 "job": "nvme0n1", 00:18:33.930 "core_mask": "0x2", 00:18:33.930 "workload": "randwrite", 00:18:33.930 "status": "finished", 00:18:33.930 "queue_depth": 128, 00:18:33.930 "io_size": 4096, 00:18:33.930 "runtime": 2.008211, 00:18:33.930 "iops": 16317.010513337493, 00:18:33.930 "mibps": 63.73832231772458, 00:18:33.930 "io_failed": 0, 00:18:33.930 "io_timeout": 0, 00:18:33.930 "avg_latency_us": 7837.947954545454, 00:18:33.930 "min_latency_us": 2606.5454545454545, 00:18:33.930 "max_latency_us": 33840.40727272727 00:18:33.930 } 00:18:33.930 ], 00:18:33.930 "core_count": 1 00:18:33.930 } 00:18:33.931 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:33.931 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:33.931 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:33.931 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:33.931 | .driver_specific 00:18:33.931 | .nvme_error 00:18:33.931 | .status_code 00:18:33.931 | 
.command_transient_transport_error' 00:18:34.189 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 128 > 0 )) 00:18:34.189 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80716 00:18:34.189 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80716 ']' 00:18:34.189 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80716 00:18:34.189 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:34.190 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.190 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80716 00:18:34.190 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:34.190 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:34.190 killing process with pid 80716 00:18:34.190 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80716' 00:18:34.190 Received shutdown signal, test time was about 2.000000 seconds 00:18:34.190 00:18:34.190 Latency(us) 00:18:34.190 [2024-10-08T09:24:25.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.190 [2024-10-08T09:24:25.873Z] =================================================================================================================== 00:18:34.190 [2024-10-08T09:24:25.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.190 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80716 00:18:34.190 09:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80716 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80776 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80776 /var/tmp/bperf.sock 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80776 ']' 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.448 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bperf.sock... 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.448 09:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.708 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:34.708 Zero copy mechanism will not be used. 00:18:34.708 [2024-10-08 09:24:26.144626] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:18:34.708 [2024-10-08 09:24:26.144755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80776 ] 00:18:34.708 [2024-10-08 09:24:26.283156] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.708 [2024-10-08 09:24:26.384907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.966 [2024-10-08 09:24:26.442037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.534 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.534 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:35.534 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:35.534 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:35.794 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:35.794 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.794 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:35.794 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.794 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:35.794 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:36.362 nvme0n1 00:18:36.362 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:36.362 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.362 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:36.362 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
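The trace above wires up the second error run (randwrite, 131072-byte I/O, queue depth 16): NVMe error statistics and unlimited bdev retries are enabled on the bdevperf side, crc32c corruption is injected through the accel error framework, and the controller is attached over TCP with the data digest (--ddgst) enabled so the corruption surfaces as digest failures. A hand-run sketch of the same RPCs, assembled from the commands visible in the trace (the bperf socket, target address, and rpc.py path are taken verbatim; which socket the error-injection RPC should target depends on the test's rpc_cmd wrapper and is left at the default here):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# bdevperf side: keep NVMe error counters and retry failed I/O indefinitely
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# inject crc32c corruption via the accel error framework (flags as captured in the trace above)
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# attach the target with data digest enabled so the corrupted digests are caught in flight
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# after perform_tests finishes, read back the transient-error counter the test asserts on
$RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'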
00:18:36.362 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:36.362 09:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:36.362 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:36.362 Zero copy mechanism will not be used. 00:18:36.362 Running I/O for 2 seconds... 00:18:36.362 [2024-10-08 09:24:27.914845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.915233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.915290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.920966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.921289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.921330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.927567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.927949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.927990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.933647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.934049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.934092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.939752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.940091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.940130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.946298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.946656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.946696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.952684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.953071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.953111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.959383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.959756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.959816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.966097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.966480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.966523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.972275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.972359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.972382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.978290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.978370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.978408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.984635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.984732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.984770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.991116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.991217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.991239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:27.997560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.362 [2024-10-08 09:24:27.997664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.362 [2024-10-08 09:24:27.997686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.362 [2024-10-08 09:24:28.003905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.363 [2024-10-08 09:24:28.004008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.363 [2024-10-08 09:24:28.004058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.363 [2024-10-08 09:24:28.009995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.363 [2024-10-08 09:24:28.010084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.363 [2024-10-08 09:24:28.010104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.363 [2024-10-08 09:24:28.016245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.363 [2024-10-08 09:24:28.016329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.363 [2024-10-08 09:24:28.016351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.363 [2024-10-08 09:24:28.022263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.363 [2024-10-08 09:24:28.022362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.363 [2024-10-08 09:24:28.022384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.363 [2024-10-08 09:24:28.028521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.363 [2024-10-08 09:24:28.028613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.363 [2024-10-08 09:24:28.028635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.363 [2024-10-08 09:24:28.034630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.363 [2024-10-08 09:24:28.034728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.363 [2024-10-08 09:24:28.034751] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.363 [2024-10-08 09:24:28.041100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.363 [2024-10-08 09:24:28.041206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.363 [2024-10-08 09:24:28.041232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.047711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.047831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.047855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.053631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.053770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.053806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.060078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.060166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.060187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.066163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.066245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.066267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.072557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.072643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.072666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.079076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.079164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.079186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.085542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.085638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.085660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.092107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.092208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.092230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.098214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.098326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.098348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.104667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.104789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.104832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.110586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.110679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.110701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.117066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.117173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.117207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.123167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.123257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 
09:24:28.123280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.129822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.129970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.129993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.136477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.136580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.136615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.142934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.143066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.143088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.149151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.149247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.149269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.155570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.623 [2024-10-08 09:24:28.155689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.623 [2024-10-08 09:24:28.155710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.623 [2024-10-08 09:24:28.161673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.161786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.161819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.167560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.167661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:36.624 [2024-10-08 09:24:28.167683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.174161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.174259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.174325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.180214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.180295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.180316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.186780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.186907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.186929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.193734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.193863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.193895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.200624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.200747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.200788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.207747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.207861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.207883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.215183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.215274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.215295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.222418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.222510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.222548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.229473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.229556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.229579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.236038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.236147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.236206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.242931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.243029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.243050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.249345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.249452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.249474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.255537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.255638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.255671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.261992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.262065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.262086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.268035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.268126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.268149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.274571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.274676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.274702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.281292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.281413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.281461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.287464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.287580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.287603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.294009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.294113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.294135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.624 [2024-10-08 09:24:28.300502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.624 [2024-10-08 09:24:28.300610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.624 [2024-10-08 09:24:28.300680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.306787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.306924] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.306947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.313359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.313452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.313475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.319395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.319501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.319525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.326230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.326356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.326379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.332874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.332980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.333001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.339582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.339684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.339714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.346031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.346140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.346162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.352746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.352877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.352901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.358742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.358904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.358928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.365241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.365333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.365356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.371556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.371659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.371692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.377608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.377692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.377713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.383826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.383916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.383937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.390151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.390262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.390311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.396591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 
09:24:28.396685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.883 [2024-10-08 09:24:28.396707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.883 [2024-10-08 09:24:28.402672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.883 [2024-10-08 09:24:28.402805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.402840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.409171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.409276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.409298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.415725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.415851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.415884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.422357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.422445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.422467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.428706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.428835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.428857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.435309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.435394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.435416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.441121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with 
pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.441215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.441253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.447638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.447768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.447798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.454150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.454245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.454267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.460757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.460876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.460908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.467270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.467400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.467424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.473538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.473623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.473645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.480085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.480160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.480182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.486723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.486835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.486858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.493428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.493551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.493573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.499698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.499788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.499820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.505826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.505911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.505933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.512223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.512306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.512327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.518347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.518428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.518466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.524988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.525093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.525113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.531499] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.531590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.531613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.538052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.538154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.538175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.544483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.544576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.544597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.550687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.550833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.550884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.557279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.557387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.557425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.884 [2024-10-08 09:24:28.563807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:36.884 [2024-10-08 09:24:28.563903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.884 [2024-10-08 09:24:28.563927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.570367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.570440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.570465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.576551] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.576636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.576659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.582542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.582641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.582692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.589310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.589440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.589464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.595564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.595659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.595682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.602210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.602319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.602343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.608700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.608813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.608837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.615423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.615514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.615536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.144 
[2024-10-08 09:24:28.621801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.621923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.621952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.627971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.628055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.628088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.634064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.634151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.634173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.640862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.640958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.640982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.646710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.646817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.646872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.653270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.653362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.653384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.659923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.660047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.660070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.666506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.666640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.666677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.672890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.672982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.673003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.679262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.679344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.679366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.685556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.685651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.685673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.691503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.691583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.691606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.697568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.697662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.697694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.704205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.704281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.704303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.710207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.710338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.710360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.716854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.716930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.716952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.722955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.723040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.723063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.729380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.729467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.729489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.735598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.735679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.144 [2024-10-08 09:24:28.735700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.144 [2024-10-08 09:24:28.742223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.144 [2024-10-08 09:24:28.742335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.742358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.748755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.748857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.748878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.755584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.755710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.755743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.762064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.762167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.762204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.768572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.768656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.768677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.775081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.775171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.775193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.781156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.781239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.781260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.787372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.787455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.787477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.793349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.793441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.793464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.799345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.799430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.799453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.805861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.805961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.805982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.812131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.812241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.812262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.818888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.818982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.819004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.145 [2024-10-08 09:24:28.825184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.145 [2024-10-08 09:24:28.825262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.145 [2024-10-08 09:24:28.825285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.831306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.831385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.831424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.837511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.837591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 
09:24:28.837613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.843866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.843938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.843976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.850228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.850366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.850403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.857290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.857415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.857437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.864115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.864249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.864271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.871278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.871358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.871385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.877637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.877719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.877740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.884478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.884564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:37.405 [2024-10-08 09:24:28.884585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.891305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.891390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.891412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.897999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.898075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.898113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.405 4781.00 IOPS, 597.62 MiB/s [2024-10-08T09:24:29.088Z] [2024-10-08 09:24:28.905253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.905338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.905361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.911949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.912042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.912064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.918363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.918437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.918459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.925021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.925128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.925164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.932021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.932112] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.932158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.938970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.939070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.939092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.945525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.945623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.945645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.952276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.952379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.952415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.958394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.958472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.958495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.965099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.965193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.965216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.972066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.972165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.972199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.405 [2024-10-08 09:24:28.978578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.405 [2024-10-08 09:24:28.978707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-10-08 09:24:28.978729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:28.985460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:28.985548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:28.985570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:28.991997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:28.992100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:28.992122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:28.998610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:28.998753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:28.998776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.005194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.005295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.005315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.011911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.012010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.012032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.018793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.018885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.018907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.025502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 
09:24:29.025599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.025620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.032114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.032217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.032270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.038752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.038885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.038907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.045182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.045282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.045305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.051863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.051977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.052014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.058111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.058204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.058225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.065131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.065237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.065267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.071587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 
00:18:37.406 [2024-10-08 09:24:29.071710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.071731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.078615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.078735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.078759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.406 [2024-10-08 09:24:29.085006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.406 [2024-10-08 09:24:29.085118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-10-08 09:24:29.085142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.091536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.091632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.091656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.098251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.098346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.098370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.104701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.104832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.104855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.111320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.111445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.111467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.117509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) 
with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.117592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.117613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.123970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.124055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.124078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.130151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.130250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.130297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.136147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.136259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.136280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.142193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.142326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.142349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.148575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.148663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.148684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.155023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.155119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.155142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.161325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.161412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.161437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.167576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.167688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.167710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.174104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.174220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.174242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.180330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.180428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.180449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.186365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.186474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.186497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.192452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.192547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.192568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.198827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.198922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.198943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.205407] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.205512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.205533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.212053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.212141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.212165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.219202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.666 [2024-10-08 09:24:29.219312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.666 [2024-10-08 09:24:29.219334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.666 [2024-10-08 09:24:29.225947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.226077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.226098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.233432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.233545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.233567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.240758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.240926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.240949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.247837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.247962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.247983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.667 
[2024-10-08 09:24:29.254500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.254606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.254628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.260709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.260823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.260845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.267195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.267324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.267345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.273935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.274025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.274046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.280167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.280249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.280271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.286235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.286382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.286403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.292846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.292958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.292980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.298858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.298973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.298995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.305029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.305129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.305150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.311425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.311516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.311538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.317993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.318182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.318203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.324314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.324404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.324426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.330755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.330909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.330931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.337411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.337510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.337532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.667 [2024-10-08 09:24:29.343924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.667 [2024-10-08 09:24:29.344014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.667 [2024-10-08 09:24:29.344037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.350138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.350228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.350252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.356339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.356430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.356452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.362544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.362651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.362673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.368792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.368891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.368913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.375042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.375151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.375173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.381701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.381811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.381833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.388155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.388267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.388288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.394341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.394442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.394467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.400600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.400704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.400725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.406692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.406781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.406803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.413279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.413365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.413388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.419895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.420014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.420039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.426056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.426141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.426163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.432052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.432125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.432147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.438613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.438729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.438751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.445449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.445581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.445602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.452126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.452252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.452274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.458114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.458238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.458259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.464421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.464512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 09:24:29.464535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.470464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.470545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.927 [2024-10-08 
09:24:29.470581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.927 [2024-10-08 09:24:29.476982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.927 [2024-10-08 09:24:29.477080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.477117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.483121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.483232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.483254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.489736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.489842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.489863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.496338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.496438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.496460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.502441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.502516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.502539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.508891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.509000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.509021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.514960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.515052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:37.928 [2024-10-08 09:24:29.515073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.520952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.521045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.521067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.527033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.527125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.527147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.532986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.533098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.533120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.539506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.539593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.539615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.546020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.546126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.546147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.552528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.552638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.552660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.559191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.559296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.559319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.565718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.565822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.565843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.572245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.572344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.572382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.578612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.578727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.578764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.585295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.585388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.585409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.591311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.591393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.591416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.597199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.597291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.597313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.928 [2024-10-08 09:24:29.603205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:37.928 [2024-10-08 09:24:29.603316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.928 [2024-10-08 09:24:29.603338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.188 [2024-10-08 09:24:29.609622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.188 [2024-10-08 09:24:29.609733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.188 [2024-10-08 09:24:29.609780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.188 [2024-10-08 09:24:29.616181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.188 [2024-10-08 09:24:29.616260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.188 [2024-10-08 09:24:29.616285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.188 [2024-10-08 09:24:29.622412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.188 [2024-10-08 09:24:29.622498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.188 [2024-10-08 09:24:29.622522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.188 [2024-10-08 09:24:29.629128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.188 [2024-10-08 09:24:29.629260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.188 [2024-10-08 09:24:29.629286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.188 [2024-10-08 09:24:29.635636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.188 [2024-10-08 09:24:29.635785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.188 [2024-10-08 09:24:29.635810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.642396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.642487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.642511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.649166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.649279] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.649302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.656000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.656090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.656128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.662584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.662699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.662722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.669248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.669358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.669380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.675714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.676204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.676227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.682955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.683039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.683062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.688821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.688910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.688932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.695358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.695621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.695654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.701563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.701641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.701664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.708070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.708194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.708217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.714395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.714477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.714500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.720735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.720867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.720890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.727015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.727097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.727118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.733229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.733307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.733329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.739673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 
09:24:29.739948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.739972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.745952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.746219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.746537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.752109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.752417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.752851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.758789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.759106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.759391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.765712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.766006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.766391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.772376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.772646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.772911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.779182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.779486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.779765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.785923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with 
pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.786048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.786070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.792485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.792597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.792620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.799078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.799159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.799181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.805388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.805473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.805495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.811829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.811927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.811951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.817729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.189 [2024-10-08 09:24:29.817818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.189 [2024-10-08 09:24:29.817840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.189 [2024-10-08 09:24:29.824249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.190 [2024-10-08 09:24:29.824341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.190 [2024-10-08 09:24:29.824363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.190 [2024-10-08 09:24:29.830849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.190 [2024-10-08 09:24:29.830948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.190 [2024-10-08 09:24:29.830971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.190 [2024-10-08 09:24:29.837389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.190 [2024-10-08 09:24:29.837494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.190 [2024-10-08 09:24:29.837517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.190 [2024-10-08 09:24:29.844491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.190 [2024-10-08 09:24:29.844733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.190 [2024-10-08 09:24:29.844756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.190 [2024-10-08 09:24:29.850597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.190 [2024-10-08 09:24:29.850696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.190 [2024-10-08 09:24:29.850717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.190 [2024-10-08 09:24:29.856926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.190 [2024-10-08 09:24:29.857030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.190 [2024-10-08 09:24:29.857052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.190 [2024-10-08 09:24:29.862867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.190 [2024-10-08 09:24:29.862967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.190 [2024-10-08 09:24:29.862989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.190 [2024-10-08 09:24:29.869428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.190 [2024-10-08 09:24:29.869519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.190 [2024-10-08 09:24:29.869558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.449 [2024-10-08 09:24:29.875862] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.449 [2024-10-08 09:24:29.875965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.449 [2024-10-08 09:24:29.875991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.449 [2024-10-08 09:24:29.882389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.449 [2024-10-08 09:24:29.882466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.449 [2024-10-08 09:24:29.882491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.449 [2024-10-08 09:24:29.888804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.449 [2024-10-08 09:24:29.889077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.449 [2024-10-08 09:24:29.889100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.449 [2024-10-08 09:24:29.895501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.449 [2024-10-08 09:24:29.895601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.449 [2024-10-08 09:24:29.895624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.449 4783.50 IOPS, 597.94 MiB/s [2024-10-08T09:24:30.132Z] [2024-10-08 09:24:29.903299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b29230) with pdu=0x2000198fef90 00:18:38.449 [2024-10-08 09:24:29.903414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.449 [2024-10-08 09:24:29.903439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.449 00:18:38.449 Latency(us) 00:18:38.449 [2024-10-08T09:24:30.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.449 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:38.449 nvme0n1 : 2.00 4781.79 597.72 0.00 0.00 3338.99 2353.34 11856.06 00:18:38.449 [2024-10-08T09:24:30.132Z] =================================================================================================================== 00:18:38.449 [2024-10-08T09:24:30.132Z] Total : 4781.79 597.72 0.00 0.00 3338.99 2353.34 11856.06 00:18:38.449 { 00:18:38.449 "results": [ 00:18:38.449 { 00:18:38.449 "job": "nvme0n1", 00:18:38.449 "core_mask": "0x2", 00:18:38.449 "workload": "randwrite", 00:18:38.449 "status": "finished", 00:18:38.449 "queue_depth": 16, 00:18:38.449 "io_size": 131072, 00:18:38.449 "runtime": 2.004898, 00:18:38.449 "iops": 4781.789397764874, 00:18:38.449 "mibps": 597.7236747206092, 00:18:38.449 "io_failed": 0, 
00:18:38.449 "io_timeout": 0, 00:18:38.449 "avg_latency_us": 3338.987484946471, 00:18:38.449 "min_latency_us": 2353.338181818182, 00:18:38.449 "max_latency_us": 11856.058181818182 00:18:38.449 } 00:18:38.449 ], 00:18:38.449 "core_count": 1 00:18:38.449 } 00:18:38.449 09:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:38.449 09:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:38.449 09:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:38.449 | .driver_specific 00:18:38.449 | .nvme_error 00:18:38.449 | .status_code 00:18:38.449 | .command_transient_transport_error' 00:18:38.449 09:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 309 > 0 )) 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80776 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80776 ']' 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80776 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80776 00:18:38.708 killing process with pid 80776 00:18:38.708 Received shutdown signal, test time was about 2.000000 seconds 00:18:38.708 00:18:38.708 Latency(us) 00:18:38.708 [2024-10-08T09:24:30.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.708 [2024-10-08T09:24:30.391Z] =================================================================================================================== 00:18:38.708 [2024-10-08T09:24:30.391Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80776' 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80776 00:18:38.708 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80776 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80571 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80571 ']' 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80571 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80571 00:18:38.967 killing process with pid 80571 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80571' 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80571 00:18:38.967 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80571 00:18:39.226 00:18:39.226 real 0m18.647s 00:18:39.226 user 0m35.968s 00:18:39.226 sys 0m5.772s 00:18:39.226 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.226 ************************************ 00:18:39.226 END TEST nvmf_digest_error 00:18:39.226 ************************************ 00:18:39.226 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:39.485 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:39.485 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:39.485 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:39.485 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:39.485 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:39.485 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:39.485 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:39.486 09:24:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:39.486 rmmod nvme_tcp 00:18:39.486 rmmod nvme_fabrics 00:18:39.486 rmmod nvme_keyring 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 80571 ']' 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 80571 00:18:39.486 Process with pid 80571 is not found 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 80571 ']' 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 80571 00:18:39.486 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80571) - No such process 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 80571 is not found' 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:39.486 09:24:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:39.486 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:39.745 00:18:39.745 real 0m39.491s 00:18:39.745 user 1m13.723s 00:18:39.745 sys 0m12.045s 00:18:39.745 ************************************ 00:18:39.745 END TEST nvmf_digest 00:18:39.745 ************************************ 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.745 ************************************ 00:18:39.745 START TEST nvmf_host_multipath 00:18:39.745 ************************************ 00:18:39.745 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:40.005 * Looking for test storage... 00:18:40.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.005 --rc genhtml_branch_coverage=1 00:18:40.005 --rc genhtml_function_coverage=1 00:18:40.005 --rc genhtml_legend=1 00:18:40.005 --rc geninfo_all_blocks=1 00:18:40.005 --rc geninfo_unexecuted_blocks=1 00:18:40.005 00:18:40.005 ' 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.005 --rc genhtml_branch_coverage=1 00:18:40.005 --rc genhtml_function_coverage=1 00:18:40.005 --rc genhtml_legend=1 00:18:40.005 --rc geninfo_all_blocks=1 00:18:40.005 --rc geninfo_unexecuted_blocks=1 00:18:40.005 00:18:40.005 ' 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.005 --rc genhtml_branch_coverage=1 00:18:40.005 --rc genhtml_function_coverage=1 00:18:40.005 --rc genhtml_legend=1 00:18:40.005 --rc geninfo_all_blocks=1 00:18:40.005 --rc geninfo_unexecuted_blocks=1 00:18:40.005 00:18:40.005 ' 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.005 --rc genhtml_branch_coverage=1 00:18:40.005 --rc genhtml_function_coverage=1 00:18:40.005 --rc genhtml_legend=1 00:18:40.005 --rc geninfo_all_blocks=1 00:18:40.005 --rc geninfo_unexecuted_blocks=1 00:18:40.005 00:18:40.005 ' 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:40.005 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:40.006 Cannot find device "nvmf_init_br" 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:40.006 Cannot find device "nvmf_init_br2" 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:40.006 Cannot find device "nvmf_tgt_br" 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.006 Cannot find device "nvmf_tgt_br2" 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:40.006 Cannot find device "nvmf_init_br" 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:40.006 Cannot find device "nvmf_init_br2" 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:40.006 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:40.265 Cannot find device "nvmf_tgt_br" 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:40.265 Cannot find device "nvmf_tgt_br2" 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:40.265 Cannot find device "nvmf_br" 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:40.265 Cannot find device "nvmf_init_if" 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:40.265 Cannot find device "nvmf_init_if2" 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:40.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:40.265 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:40.266 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:40.266 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
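The nvmf_veth_init sequence traced through this block (nvmf/common.sh@145 onwards) builds the virtual network the multipath host tests run over: two veth pairs, a target network namespace, and a bridge joining the host-side peers. Condensed to a single initiator/target pair, and keeping the interface names and 10.0.0.x addresses that appear in the trace, the plumbing amounts to roughly the sketch below; the second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is wired up the same way. This is a hand-condensed summary of the commands in the trace above and immediately below, not a separate script shipped with the test.

  # target namespace plus two veth pairs; the *_br peers stay on the host side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # 10.0.0.1 = initiator endpoint, 10.0.0.3 = target endpoint inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers and let NVMe/TCP (port 4420) in from the initiator interface;
  # the real rule also carries an SPDK_NVMF comment so the iptr cleanup seen earlier can strip it
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # reachability check, matching the ping statistics below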
00:18:40.266 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:40.266 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:40.266 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:40.524 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:40.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:40.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:18:40.525 00:18:40.525 --- 10.0.0.3 ping statistics --- 00:18:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.525 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:40.525 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:40.525 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:18:40.525 00:18:40.525 --- 10.0.0.4 ping statistics --- 00:18:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.525 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:40.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:40.525 00:18:40.525 --- 10.0.0.1 ping statistics --- 00:18:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.525 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:40.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:40.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:18:40.525 00:18:40.525 --- 10.0.0.2 ping statistics --- 00:18:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.525 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # return 0 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:40.525 09:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:40.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # nvmfpid=81114 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # waitforlisten 81114 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 81114 ']' 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.525 09:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:40.525 [2024-10-08 09:24:32.072074] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
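Once the target is up and the transport, subsystem, listeners, and bdevperf multipath controllers have been created (all traced below), every confirm_io_on_port check follows the same pattern: flip the ANA state of the two listeners over RPC, let bdevperf run for a few seconds, then ask which port actually carried IO. Condensed from the commands traced later in this log (the RPC calls and jq filter are the ones host/multipath.sh uses; the shell variables are shorthand for this sketch only):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
# e.g. make port 4420 the non_optimized path and 4421 the optimized one
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4421 -n optimized
# ask the target which listener currently advertises the expected state ...
$rpc nvmf_subsystem_get_listeners $nqn \
    | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'   # -> 4421
# ... and compare that port against the per-path IO counters that the nvmf_path.bt
# bpftrace probe prints while bdevperf runs, e.g. "@path[10.0.0.3, 4421]: 17232"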
00:18:40.525 [2024-10-08 09:24:32.072152] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.525 [2024-10-08 09:24:32.196602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:40.783 [2024-10-08 09:24:32.288266] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.783 [2024-10-08 09:24:32.288365] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.783 [2024-10-08 09:24:32.288375] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.783 [2024-10-08 09:24:32.288383] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.783 [2024-10-08 09:24:32.288389] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.783 [2024-10-08 09:24:32.292762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.783 [2024-10-08 09:24:32.292827] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.783 [2024-10-08 09:24:32.371140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:41.718 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.718 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:18:41.718 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:41.718 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.718 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:41.718 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.718 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81114 00:18:41.718 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:41.718 [2024-10-08 09:24:33.347618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.718 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:42.286 Malloc0 00:18:42.286 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:42.559 09:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.831 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:42.831 [2024-10-08 09:24:34.481452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:42.831 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:43.090 [2024-10-08 09:24:34.749613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81170 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81170 /var/tmp/bdevperf.sock 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 81170 ']' 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.090 09:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:44.467 09:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.468 09:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:18:44.468 09:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:44.468 09:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:44.726 Nvme0n1 00:18:44.727 09:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:44.985 Nvme0n1 00:18:44.985 09:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:44.985 09:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:46.360 09:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:46.360 09:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:46.360 09:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:46.619 09:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:46.619 09:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81215 00:18:46.619 09:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81114 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:46.619 09:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:53.185 Attaching 4 probes... 00:18:53.185 @path[10.0.0.3, 4421]: 17232 00:18:53.185 @path[10.0.0.3, 4421]: 17872 00:18:53.185 @path[10.0.0.3, 4421]: 17376 00:18:53.185 @path[10.0.0.3, 4421]: 16865 00:18:53.185 @path[10.0.0.3, 4421]: 15076 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81215 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:53.185 09:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:53.753 09:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:53.753 09:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81114 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:53.753 09:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81334 00:18:53.753 09:24:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:00.317 Attaching 4 probes... 00:19:00.317 @path[10.0.0.3, 4420]: 15077 00:19:00.317 @path[10.0.0.3, 4420]: 15457 00:19:00.317 @path[10.0.0.3, 4420]: 16485 00:19:00.317 @path[10.0.0.3, 4420]: 15736 00:19:00.317 @path[10.0.0.3, 4420]: 15600 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81334 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:00.317 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:00.575 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:00.575 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81452 00:19:00.575 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:00.575 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81114 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:07.136 Attaching 4 probes... 00:19:07.136 @path[10.0.0.3, 4421]: 13923 00:19:07.136 @path[10.0.0.3, 4421]: 17077 00:19:07.136 @path[10.0.0.3, 4421]: 16538 00:19:07.136 @path[10.0.0.3, 4421]: 17019 00:19:07.136 @path[10.0.0.3, 4421]: 16736 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81452 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:07.136 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:07.394 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:07.394 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81564 00:19:07.394 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81114 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:07.394 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:13.970 Attaching 4 probes... 
00:19:13.970 00:19:13.970 00:19:13.970 00:19:13.970 00:19:13.970 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81564 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:13.970 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:14.538 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:14.538 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81114 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:14.538 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81681 00:19:14.538 09:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:21.104 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:21.104 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:21.104 Attaching 4 probes... 
00:19:21.104 @path[10.0.0.3, 4421]: 16273 00:19:21.104 @path[10.0.0.3, 4421]: 16515 00:19:21.104 @path[10.0.0.3, 4421]: 16982 00:19:21.104 @path[10.0.0.3, 4421]: 17691 00:19:21.104 @path[10.0.0.3, 4421]: 16953 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81681 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:21.104 [2024-10-08 09:25:12.554113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22769a0 is same with the state(6) to be set 00:19:21.104 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:22.040 09:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:22.040 09:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81800 00:19:22.040 09:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81114 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:22.040 09:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:28.606 Attaching 4 probes... 
00:19:28.606 @path[10.0.0.3, 4420]: 18047 00:19:28.606 @path[10.0.0.3, 4420]: 18781 00:19:28.606 @path[10.0.0.3, 4420]: 19146 00:19:28.606 @path[10.0.0.3, 4420]: 18760 00:19:28.606 @path[10.0.0.3, 4420]: 19177 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81800 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:28.606 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:28.606 [2024-10-08 09:25:20.191376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:28.606 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:28.864 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:35.463 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:35.463 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81980 00:19:35.463 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81114 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:35.463 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:42.043 Attaching 4 probes... 
00:19:42.043 @path[10.0.0.3, 4421]: 15574 00:19:42.043 @path[10.0.0.3, 4421]: 15869 00:19:42.043 @path[10.0.0.3, 4421]: 15838 00:19:42.043 @path[10.0.0.3, 4421]: 16056 00:19:42.043 @path[10.0.0.3, 4421]: 15865 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81980 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81170 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 81170 ']' 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 81170 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:19:42.043 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.044 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81170 00:19:42.044 killing process with pid 81170 00:19:42.044 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:42.044 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:42.044 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81170' 00:19:42.044 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 81170 00:19:42.044 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 81170 00:19:42.044 { 00:19:42.044 "results": [ 00:19:42.044 { 00:19:42.044 "job": "Nvme0n1", 00:19:42.044 "core_mask": "0x4", 00:19:42.044 "workload": "verify", 00:19:42.044 "status": "terminated", 00:19:42.044 "verify_range": { 00:19:42.044 "start": 0, 00:19:42.044 "length": 16384 00:19:42.044 }, 00:19:42.044 "queue_depth": 128, 00:19:42.044 "io_size": 4096, 00:19:42.044 "runtime": 56.143291, 00:19:42.044 "iops": 7197.102143513461, 00:19:42.044 "mibps": 28.113680248099456, 00:19:42.044 "io_failed": 0, 00:19:42.044 "io_timeout": 0, 00:19:42.044 "avg_latency_us": 17755.56670323858, 00:19:42.044 "min_latency_us": 210.38545454545454, 00:19:42.044 "max_latency_us": 7015926.69090909 00:19:42.044 } 00:19:42.044 ], 00:19:42.044 "core_count": 1 00:19:42.044 } 00:19:42.044 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81170 00:19:42.044 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:42.044 [2024-10-08 09:24:34.811626] Starting SPDK v25.01-pre git sha1 91fca59bc / 
DPDK 24.03.0 initialization... 00:19:42.044 [2024-10-08 09:24:34.811709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81170 ] 00:19:42.044 [2024-10-08 09:24:34.945753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.044 [2024-10-08 09:24:35.085241] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.044 [2024-10-08 09:24:35.164609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:42.044 Running I/O for 90 seconds... 00:19:42.044 7579.00 IOPS, 29.61 MiB/s [2024-10-08T09:25:33.727Z] 8041.50 IOPS, 31.41 MiB/s [2024-10-08T09:25:33.727Z] 8305.00 IOPS, 32.44 MiB/s [2024-10-08T09:25:33.727Z] 8464.75 IOPS, 33.07 MiB/s [2024-10-08T09:25:33.727Z] 8512.60 IOPS, 33.25 MiB/s [2024-10-08T09:25:33.727Z] 8497.83 IOPS, 33.19 MiB/s [2024-10-08T09:25:33.727Z] 8366.14 IOPS, 32.68 MiB/s [2024-10-08T09:25:33.727Z] 8275.88 IOPS, 32.33 MiB/s [2024-10-08T09:25:33.727Z] [2024-10-08 09:24:45.173681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.173770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.173886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.173910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.173935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.173952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.173975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.173992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.174030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.174068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.174104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.174141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.044 [2024-10-08 09:24:45.174188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.044 [2024-10-08 09:24:45.174266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.044 [2024-10-08 09:24:45.174325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.044 [2024-10-08 09:24:45.174361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.044 [2024-10-08 09:24:45.174399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.044 [2024-10-08 09:24:45.174436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.044 [2024-10-08 09:24:45.174473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.044 [2024-10-08 09:24:45.174511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:42.044 [2024-10-08 09:24:45.174811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.174857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.174895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.174932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.174970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.174992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.175030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.175057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.175074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.175112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.175128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.175193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.175215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.175239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.044 [2024-10-08 09:24:45.175255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:42.044 [2024-10-08 09:24:45.175280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.175944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.175966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.175982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 
dnr:0 00:19:42.045 [2024-10-08 09:24:45.176242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.176686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.176724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.176762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.176826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.176869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.176906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.176945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.176967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.176982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.177005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.045 [2024-10-08 09:24:45.177021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.177043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.177058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.177080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.177096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:42.045 [2024-10-08 09:24:45.177118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.045 [2024-10-08 09:24:45.177133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.177169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.177221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.177287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.177358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.177395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:42.046 [2024-10-08 09:24:45.177603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.177973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.177996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.046 [2024-10-08 09:24:45.178556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.178929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:19:42.046 [2024-10-08 09:24:45.178967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.178984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.179007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.179022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:42.046 [2024-10-08 09:24:45.179044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.046 [2024-10-08 09:24:45.179068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.179091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:45.179107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.179131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:45.179161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.179182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:45.179228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.180948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:45.180982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:45.181733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:45.181748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:42.047 8212.89 IOPS, 32.08 MiB/s [2024-10-08T09:25:33.730Z] 8157.20 IOPS, 31.86 MiB/s [2024-10-08T09:25:33.730Z] 8116.73 IOPS, 31.71 MiB/s [2024-10-08T09:25:33.730Z] 8128.33 IOPS, 31.75 MiB/s [2024-10-08T09:25:33.730Z] 8108.62 IOPS, 31.67 MiB/s 
[2024-10-08T09:25:33.730Z] 8087.14 IOPS, 31.59 MiB/s [2024-10-08T09:25:33.730Z] 8079.20 IOPS, 31.56 MiB/s [2024-10-08T09:25:33.730Z] [2024-10-08 09:24:51.839939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:42.047 [2024-10-08 09:24:51.840472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.047 [2024-10-08 09:24:51.840698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:51.840730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:51.840766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:51.840799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:51.840862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:51.840947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.840967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:51.840981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.841001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:51.841015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:42.047 [2024-10-08 09:24:51.841035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.047 [2024-10-08 09:24:51.841049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:19:42.048 [2024-10-08 09:24:51.841816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.841830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.048 [2024-10-08 09:24:51.841863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.048 [2024-10-08 09:24:51.841941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.048 [2024-10-08 09:24:51.841976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.841996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.048 [2024-10-08 09:24:51.842009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.048 [2024-10-08 09:24:51.842052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.048 [2024-10-08 09:24:51.842085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.048 [2024-10-08 09:24:51.842119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.048 [2024-10-08 09:24:51.842152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.842189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.842242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.842320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.842355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.842389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.842424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.842457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.842491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:42.048 [2024-10-08 09:24:51.842519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.048 [2024-10-08 09:24:51.842534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.842966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.842982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.843047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:42.049 [2024-10-08 09:24:51.843110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.843708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.843748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.843780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.843812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.843859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.843911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.843945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.843978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.843998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.844012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.844031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.844044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.844073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.844088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.844107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.844120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.844139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.049 [2024-10-08 09:24:51.844153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.844173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.844186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:42.049 [2024-10-08 09:24:51.844205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.049 [2024-10-08 09:24:51.844219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:19:42.049 [2024-10-08 09:24:51.844238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.050 [2024-10-08 09:24:51.844252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.844270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.050 [2024-10-08 09:24:51.844284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.844303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.050 [2024-10-08 09:24:51.844317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.844336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.050 [2024-10-08 09:24:51.844349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.844368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.050 [2024-10-08 09:24:51.844382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.050 [2024-10-08 09:24:51.845076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:51.845977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:51.845991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:42.050 7617.75 IOPS, 29.76 MiB/s [2024-10-08T09:25:33.733Z] 7615.18 IOPS, 29.75 MiB/s [2024-10-08T09:25:33.733Z] 7663.22 IOPS, 29.93 MiB/s [2024-10-08T09:25:33.733Z] 7697.79 IOPS, 30.07 MiB/s [2024-10-08T09:25:33.733Z] 7736.90 IOPS, 30.22 MiB/s [2024-10-08T09:25:33.733Z] 7765.62 IOPS, 30.33 MiB/s [2024-10-08T09:25:33.733Z] 7789.18 IOPS, 30.43 MiB/s [2024-10-08T09:25:33.733Z] [2024-10-08 09:24:59.009724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:42.050 [2024-10-08 09:24:59.009793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:59.009843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:59.009863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:59.009883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:59.009897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:59.009949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:59.009964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:59.009983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:59.009996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:59.010014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:59.010027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:59.010045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:59.010058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:59.010076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:59.010089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:59.010112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:59.010126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:42.050 [2024-10-08 09:24:59.010144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.050 [2024-10-08 09:24:59.010158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.010762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.010798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.010831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.010864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.010896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.010929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.010972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.010993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:19:42.051 [2024-10-08 09:24:59.011059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.051 [2024-10-08 09:24:59.011711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.051 [2024-10-08 09:24:59.011760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:42.051 [2024-10-08 09:24:59.011783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.011800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.011820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.011834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.011855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.011868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.011898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.011913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.011933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.011946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.011967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.011980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.052 [2024-10-08 09:24:59.012600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.052 [2024-10-08 09:24:59.012679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.052 [2024-10-08 09:24:59.012718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.052 [2024-10-08 09:24:59.012774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.052 [2024-10-08 09:24:59.012831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:66 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.052 [2024-10-08 09:24:59.012871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.052 [2024-10-08 09:24:59.012910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.052 [2024-10-08 09:24:59.012948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.012972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.012986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013285] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:42.052 [2024-10-08 09:24:59.013462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.052 [2024-10-08 09:24:59.013476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 
sqhd:005a p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.013904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.013944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.013976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.013990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014411] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.014551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.014602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.014638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.014674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.014709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.014761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.014798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.053 [2024-10-08 09:24:59.014833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.014971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.014992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.015015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.015029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.015051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.015064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:42.053 [2024-10-08 09:24:59.015098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.053 [2024-10-08 09:24:59.015113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:24:59.015135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:24:59.015149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:24:59.015171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:24:59.015184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.054 7538.87 IOPS, 29.45 MiB/s [2024-10-08T09:25:33.737Z] 7224.75 IOPS, 28.22 MiB/s [2024-10-08T09:25:33.737Z] 
6935.76 IOPS, 27.09 MiB/s [2024-10-08T09:25:33.737Z] 6669.00 IOPS, 26.05 MiB/s [2024-10-08T09:25:33.737Z] 6422.00 IOPS, 25.09 MiB/s [2024-10-08T09:25:33.737Z] 6192.64 IOPS, 24.19 MiB/s [2024-10-08T09:25:33.737Z] 5979.10 IOPS, 23.36 MiB/s [2024-10-08T09:25:33.737Z] 5974.70 IOPS, 23.34 MiB/s [2024-10-08T09:25:33.737Z] 6051.90 IOPS, 23.64 MiB/s [2024-10-08T09:25:33.737Z] 6121.28 IOPS, 23.91 MiB/s [2024-10-08T09:25:33.737Z] 6197.12 IOPS, 24.21 MiB/s [2024-10-08T09:25:33.737Z] 6274.62 IOPS, 24.51 MiB/s [2024-10-08T09:25:33.737Z] 6333.06 IOPS, 24.74 MiB/s [2024-10-08T09:25:33.737Z] [2024-10-08 09:25:12.554136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.554969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.554982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.555013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.555044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 
09:25:12.555127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.054 [2024-10-08 09:25:12.555664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.555689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.555714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.054 [2024-10-08 09:25:12.555727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.054 [2024-10-08 09:25:12.555754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.555781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.555805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.555829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.555853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.555878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.555902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.555926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.555950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.555983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.555995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
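The abort dump continues below: I/Os queued on qpair 1 first complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02) as the path's ANA state reports inaccessible, and the remainder are then failed with ABORTED - SQ DELETION (00/08) while the queue pair is torn down ahead of the controller reset. When reading a saved copy of this console output, a short pipeline like the one below groups those completions by status; it is only a sketch for digesting the log (the file name build.log is an assumption, and the pattern matches the spdk_nvme_print_completion lines shown here):

#!/usr/bin/env bash
# Count spdk_nvme_print_completion records per NVMe status in a saved console log.
# Assumption: the output above was captured to build.log.
grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z -]* ([0-9a-f]*/[0-9a-f]*)' build.log \
  | sed 's/.*\*NOTICE\*: //' \
  | sort | uniq -c | sort -rn
# Prints one line per status text, e.g. "ABORTED - SQ DELETION (00/08)" or
# "ASYMMETRIC ACCESS INACCESSIBLE (03/02)", preceded by how many completions carried it.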
00:19:42.055 [2024-10-08 09:25:12.556292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.556304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.556328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.556352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.556376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.556400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.556423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.556447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.055 [2024-10-08 09:25:12.556473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556536] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.055 [2024-10-08 09:25:12.556747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.055 [2024-10-08 09:25:12.556761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.556774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.556785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.556797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.556809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.556822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.556833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.556845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.556856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.556869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.556880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.556893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.556904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.556924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.556938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.556951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.556963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.556976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.556987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557048] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.056 [2024-10-08 09:25:12.557379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.557403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.557427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.557452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.557476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.557501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.557525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:42.056 [2024-10-08 09:25:12.557555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a3dc0 is same with the state(6) to be set 00:19:42.056 [2024-10-08 09:25:12.557590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.056 [2024-10-08 09:25:12.557600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.056 [2024-10-08 09:25:12.557609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122368 len:8 PRP1 0x0 PRP2 0x0 00:19:42.056 [2024-10-08 09:25:12.557620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.056 [2024-10-08 09:25:12.557633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.056 [2024-10-08 09:25:12.557641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.056 [2024-10-08 09:25:12.557650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122792 len:8 PRP1 0x0 PRP2 0x0 00:19:42.056 [2024-10-08 09:25:12.557661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.557672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.557680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.557689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122800 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.557700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.557712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.557720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.557729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122808 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.557752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.557764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.557772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.557781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122816 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.557791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.557802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.557811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 
[2024-10-08 09:25:12.557821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122824 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.557831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.557842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.557963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.557991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122832 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122840 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122848 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122856 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122864 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122872 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122888 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122896 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122904 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122912 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:122920 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.057 [2024-10-08 09:25:12.558548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.057 [2024-10-08 09:25:12.558556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122928 len:8 PRP1 0x0 PRP2 0x0 00:19:42.057 [2024-10-08 09:25:12.558566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558636] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20a3dc0 was disconnected and freed. reset controller. 00:19:42.057 [2024-10-08 09:25:12.558777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.057 [2024-10-08 09:25:12.558800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.057 [2024-10-08 09:25:12.558831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.057 [2024-10-08 09:25:12.558855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.057 [2024-10-08 09:25:12.558877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.057 [2024-10-08 09:25:12.558910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.057 [2024-10-08 09:25:12.558929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2032f50 is same with the state(6) to be set 00:19:42.057 [2024-10-08 09:25:12.559870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:42.057 [2024-10-08 09:25:12.559907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2032f50 (9): Bad file descriptor 00:19:42.057 [2024-10-08 09:25:12.560286] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.057 [2024-10-08 09:25:12.560315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2032f50 with addr=10.0.0.3, port=4421 00:19:42.057 [2024-10-08 09:25:12.560330] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2032f50 is same with the state(6) to be set 00:19:42.057 [2024-10-08 09:25:12.560385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2032f50 (9): Bad file descriptor 00:19:42.057 [2024-10-08 09:25:12.560416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:42.057 [2024-10-08 09:25:12.560431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:42.057 [2024-10-08 09:25:12.560443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:42.057 [2024-10-08 09:25:12.560470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.057 [2024-10-08 09:25:12.560486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:42.057 6391.17 IOPS, 24.97 MiB/s [2024-10-08T09:25:33.740Z] 6471.62 IOPS, 25.28 MiB/s [2024-10-08T09:25:33.740Z] 6537.84 IOPS, 25.54 MiB/s [2024-10-08T09:25:33.741Z] 6609.79 IOPS, 25.82 MiB/s [2024-10-08T09:25:33.741Z] 6681.80 IOPS, 26.10 MiB/s [2024-10-08T09:25:33.741Z] 6751.66 IOPS, 26.37 MiB/s [2024-10-08T09:25:33.741Z] 6819.48 IOPS, 26.64 MiB/s [2024-10-08T09:25:33.741Z] 6880.79 IOPS, 26.88 MiB/s [2024-10-08T09:25:33.741Z] 6934.59 IOPS, 27.09 MiB/s [2024-10-08T09:25:33.741Z] 6985.11 IOPS, 27.29 MiB/s [2024-10-08T09:25:33.741Z] [2024-10-08 09:25:22.616903] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:42.058 7026.54 IOPS, 27.45 MiB/s [2024-10-08T09:25:33.741Z] 7053.38 IOPS, 27.55 MiB/s [2024-10-08T09:25:33.741Z] 7077.27 IOPS, 27.65 MiB/s [2024-10-08T09:25:33.741Z] 7095.45 IOPS, 27.72 MiB/s [2024-10-08T09:25:33.741Z] 7110.34 IOPS, 27.77 MiB/s [2024-10-08T09:25:33.741Z] 7123.86 IOPS, 27.83 MiB/s [2024-10-08T09:25:33.741Z] 7139.02 IOPS, 27.89 MiB/s [2024-10-08T09:25:33.741Z] 7154.51 IOPS, 27.95 MiB/s [2024-10-08T09:25:33.741Z] 7172.24 IOPS, 28.02 MiB/s [2024-10-08T09:25:33.741Z] 7184.24 IOPS, 28.06 MiB/s [2024-10-08T09:25:33.741Z] 7196.95 IOPS, 28.11 MiB/s [2024-10-08T09:25:33.741Z] Received shutdown signal, test time was about 56.143984 seconds 00:19:42.058 00:19:42.058 Latency(us) 00:19:42.058 [2024-10-08T09:25:33.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.058 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:42.058 Verification LBA range: start 0x0 length 0x4000 00:19:42.058 Nvme0n1 : 56.14 7197.10 28.11 0.00 0.00 17755.57 210.39 7015926.69 00:19:42.058 [2024-10-08T09:25:33.741Z] =================================================================================================================== 00:19:42.058 [2024-10-08T09:25:33.741Z] Total : 7197.10 28.11 0.00 0.00 17755.57 210.39 7015926.69 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:42.058 09:25:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.058 rmmod nvme_tcp 00:19:42.058 rmmod nvme_fabrics 00:19:42.058 rmmod nvme_keyring 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@515 -- # '[' -n 81114 ']' 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # killprocess 81114 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 81114 ']' 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 81114 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81114 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:42.058 killing process with pid 81114 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81114' 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 81114 00:19:42.058 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 81114 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-save 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:42.317 09:25:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:42.317 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:42.318 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.318 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:42.318 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:42.577 00:19:42.577 real 1m2.799s 00:19:42.577 user 2m54.302s 00:19:42.577 sys 0m18.164s 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.577 ************************************ 00:19:42.577 END TEST nvmf_host_multipath 00:19:42.577 ************************************ 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.577 ************************************ 00:19:42.577 START TEST nvmf_timeout 00:19:42.577 ************************************ 00:19:42.577 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:42.837 * Looking for test storage... 
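With the multipath run finished and its virtual network torn down, timeout.sh starts its own setup below. For reference, the nvmftestfini cleanup traced above condenses to roughly the following; this is a sketch only, not the test scripts themselves (the exact iptables pipeline behind iptr is assumed from the iptables-save / grep -v SPDK_NVMF / iptables-restore calls shown, and remove_spdk_ns is assumed to delete the nvmf_tgt_ns_spdk namespace):

#!/usr/bin/env bash
# Approximate shape of the iptr/nvmf_veth_fini cleanup traced above.
# Must run as root; every step tolerates an already-removed object.

# Drop firewall rules tagged SPDK_NVMF (pipeline assumed from the flags in the trace).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the bridge ports and bring the test interfaces down.
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster 2>/dev/null || true
    ip link set "$ifc" down     2>/dev/null || true
done

# Delete the bridge, the host-side veths, and the target-side veths inside the namespace.
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if        2>/dev/null || true
ip link delete nvmf_init_if2       2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true

# remove_spdk_ns: assumed here to drop the target network namespace itself.
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true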
00:19:42.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:42.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.837 --rc genhtml_branch_coverage=1 00:19:42.837 --rc genhtml_function_coverage=1 00:19:42.837 --rc genhtml_legend=1 00:19:42.837 --rc geninfo_all_blocks=1 00:19:42.837 --rc geninfo_unexecuted_blocks=1 00:19:42.837 00:19:42.837 ' 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:42.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.837 --rc genhtml_branch_coverage=1 00:19:42.837 --rc genhtml_function_coverage=1 00:19:42.837 --rc genhtml_legend=1 00:19:42.837 --rc geninfo_all_blocks=1 00:19:42.837 --rc geninfo_unexecuted_blocks=1 00:19:42.837 00:19:42.837 ' 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:42.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.837 --rc genhtml_branch_coverage=1 00:19:42.837 --rc genhtml_function_coverage=1 00:19:42.837 --rc genhtml_legend=1 00:19:42.837 --rc geninfo_all_blocks=1 00:19:42.837 --rc geninfo_unexecuted_blocks=1 00:19:42.837 00:19:42.837 ' 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:42.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.837 --rc genhtml_branch_coverage=1 00:19:42.837 --rc genhtml_function_coverage=1 00:19:42.837 --rc genhtml_legend=1 00:19:42.837 --rc geninfo_all_blocks=1 00:19:42.837 --rc geninfo_unexecuted_blocks=1 00:19:42.837 00:19:42.837 ' 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.837 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.838 
09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:42.838 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:42.838 09:25:34 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@458 -- # nvmf_veth_init 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:42.838 Cannot find device "nvmf_init_br" 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:42.838 Cannot find device "nvmf_init_br2" 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:42.838 Cannot find device "nvmf_tgt_br" 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.838 Cannot find device "nvmf_tgt_br2" 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:42.838 Cannot find device "nvmf_init_br" 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:42.838 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:43.097 Cannot find device "nvmf_init_br2" 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:43.097 Cannot find device "nvmf_tgt_br" 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:43.097 Cannot find device "nvmf_tgt_br2" 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:43.097 Cannot find device "nvmf_br" 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:43.097 Cannot find device "nvmf_init_if" 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:43.097 Cannot find device "nvmf_init_if2" 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:43.097 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:43.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:43.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:19:43.356 00:19:43.356 --- 10.0.0.3 ping statistics --- 00:19:43.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.356 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:43.356 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:43.356 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:19:43.356 00:19:43.356 --- 10.0.0.4 ping statistics --- 00:19:43.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.356 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:43.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:43.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:43.356 00:19:43.356 --- 10.0.0.1 ping statistics --- 00:19:43.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.356 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:43.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:19:43.356 00:19:43.356 --- 10.0.0.2 ping statistics --- 00:19:43.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.356 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # return 0 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # nvmfpid=82343 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # waitforlisten 82343 00:19:43.356 09:25:34 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82343 ']' 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.356 09:25:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:43.356 [2024-10-08 09:25:34.951037] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:19:43.356 [2024-10-08 09:25:34.951158] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.615 [2024-10-08 09:25:35.091035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:43.615 [2024-10-08 09:25:35.203097] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.615 [2024-10-08 09:25:35.203171] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.615 [2024-10-08 09:25:35.203185] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.615 [2024-10-08 09:25:35.203196] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.615 [2024-10-08 09:25:35.203206] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:43.615 [2024-10-08 09:25:35.204029] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.615 [2024-10-08 09:25:35.204052] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.615 [2024-10-08 09:25:35.283355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:44.559 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.559 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:44.559 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:44.559 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:44.559 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:44.559 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.559 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:44.559 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:44.559 [2024-10-08 09:25:36.242210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.818 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:44.818 Malloc0 00:19:45.077 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:45.077 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:45.336 09:25:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:45.596 [2024-10-08 09:25:37.138421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:45.596 09:25:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82392 00:19:45.596 09:25:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:45.596 09:25:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82392 /var/tmp/bdevperf.sock 00:19:45.596 09:25:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82392 ']' 00:19:45.596 09:25:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.596 09:25:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.596 09:25:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:45.596 09:25:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.596 09:25:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:45.596 [2024-10-08 09:25:37.215875] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:19:45.596 [2024-10-08 09:25:37.215965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82392 ] 00:19:45.857 [2024-10-08 09:25:37.355797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.857 [2024-10-08 09:25:37.452927] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.857 [2024-10-08 09:25:37.533543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:46.799 09:25:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.799 09:25:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:46.800 09:25:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:46.800 09:25:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:47.061 NVMe0n1 00:19:47.061 09:25:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82416 00:19:47.061 09:25:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:47.061 09:25:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:47.320 Running I/O for 10 seconds... 
00:19:48.256 09:25:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:48.517 7785.00 IOPS, 30.41 MiB/s [2024-10-08T09:25:40.200Z] [2024-10-08 09:25:39.998939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.517 [2024-10-08 09:25:39.998990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74024 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.517 [2024-10-08 09:25:39.999204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.517 [2024-10-08 09:25:39.999215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:48.518 [2024-10-08 09:25:39.999363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999545] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.518 [2024-10-08 09:25:39.999947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:48.518 [2024-10-08 09:25:39.999956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:19:48.518-00:19:48.520 [2024-10-08 09:25:39.999966 through 09:25:40.001468] nvme_qpair.c: 243/474: long run of *NOTICE* pairs from nvme_io_qpair_print_command and spdk_nvme_print_completion — queued WRITE commands (sqid:1, lba 74352-74712, len:8, SGL DATA BLOCK) and READ commands (sqid:1, lba 73712-73952, len:8, SGL TRANSPORT DATA BLOCK), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; per-command dump condensed here.
00:19:48.520 [2024-10-08 09:25:40.001478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe01cf0 is same with the state(6) to be set
00:19:48.520 [2024-10-08 09:25:40.001488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:48.520 [2024-10-08 09:25:40.001496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:48.520 [2024-10-08 09:25:40.001504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74720 len:8 PRP1 0x0 PRP2 0x0
00:19:48.520 [2024-10-08 09:25:40.001517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:48.520 [2024-10-08 09:25:40.001567] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe01cf0 was disconnected and freed. reset controller.
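A note on reading the dump condensed above: every completion carries the status pair (00/08), i.e. status code type 0x0 (generic) and status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion". If the volume of these entries needs to be quantified, a quick pass over the captured console log is enough; this is a minimal sketch, not part of the test scripts, and the log file name is a placeholder:

  # Count the flushed completions and break the aborted commands down by opcode.
  grep -c 'ABORTED - SQ DELETION' console.log
  grep -o '\*NOTICE\*: WRITE\|\*NOTICE\*: READ' console.log | sort | uniq -c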
00:19:48.520 [2024-10-08 09:25:40.001780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:48.520 [2024-10-08 09:25:40.001844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd942e0 (9): Bad file descriptor 00:19:48.520 [2024-10-08 09:25:40.001914] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.520 [2024-10-08 09:25:40.001933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd942e0 with addr=10.0.0.3, port=4420 00:19:48.520 [2024-10-08 09:25:40.001942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd942e0 is same with the state(6) to be set 00:19:48.520 [2024-10-08 09:25:40.001957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd942e0 (9): Bad file descriptor 00:19:48.520 [2024-10-08 09:25:40.001971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:48.520 [2024-10-08 09:25:40.001980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:48.520 [2024-10-08 09:25:40.001989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:48.520 [2024-10-08 09:25:40.002006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:48.520 [2024-10-08 09:25:40.002015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:48.521 09:25:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:50.394 4606.50 IOPS, 17.99 MiB/s [2024-10-08T09:25:42.077Z] 3071.00 IOPS, 12.00 MiB/s [2024-10-08T09:25:42.077Z] [2024-10-08 09:25:42.002120] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.394 [2024-10-08 09:25:42.002166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd942e0 with addr=10.0.0.3, port=4420 00:19:50.394 [2024-10-08 09:25:42.002178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd942e0 is same with the state(6) to be set 00:19:50.394 [2024-10-08 09:25:42.002195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd942e0 (9): Bad file descriptor 00:19:50.394 [2024-10-08 09:25:42.002208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:50.394 [2024-10-08 09:25:42.002217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:50.394 [2024-10-08 09:25:42.002226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:50.394 [2024-10-08 09:25:42.002244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
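The reconnect attempts above fail in uring_sock_create with errno = 111, which on Linux is ECONNREFUSED — nothing accepted the TCP connection to 10.0.0.3:4420 at that moment. A one-liner to confirm the errno mapping, assuming a python3 interpreter is available on the host:

  # Translate errno 111 to its symbolic name and message (prints: ECONNREFUSED - Connection refused).
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'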
00:19:50.394 [2024-10-08 09:25:42.002254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:50.394 09:25:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:50.394 09:25:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:50.394 09:25:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:50.653 09:25:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:50.653 09:25:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:50.653 09:25:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:50.653 09:25:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:50.911 09:25:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:50.912 09:25:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:52.549 2303.25 IOPS, 9.00 MiB/s [2024-10-08T09:25:44.232Z] 1842.60 IOPS, 7.20 MiB/s [2024-10-08T09:25:44.232Z] [2024-10-08 09:25:44.002423] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.549 [2024-10-08 09:25:44.002470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd942e0 with addr=10.0.0.3, port=4420 00:19:52.549 [2024-10-08 09:25:44.002482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd942e0 is same with the state(6) to be set 00:19:52.549 [2024-10-08 09:25:44.002499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd942e0 (9): Bad file descriptor 00:19:52.549 [2024-10-08 09:25:44.002513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.549 [2024-10-08 09:25:44.002521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:52.549 [2024-10-08 09:25:44.002530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:52.549 [2024-10-08 09:25:44.002547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.549 [2024-10-08 09:25:44.002556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:54.422 1535.50 IOPS, 6.00 MiB/s [2024-10-08T09:25:46.105Z] 1316.14 IOPS, 5.14 MiB/s [2024-10-08T09:25:46.105Z] [2024-10-08 09:25:46.002674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:54.422 [2024-10-08 09:25:46.002703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:54.422 [2024-10-08 09:25:46.002714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:54.422 [2024-10-08 09:25:46.002722] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:54.422 [2024-10-08 09:25:46.002749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
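While the controller is cycling through these resets, the harness (host/timeout.sh@57 and @58 above) confirms over the bdevperf RPC socket that the controller "NVMe0" and its bdev "NVMe0n1" are still registered. The same two queries can be run by hand against a live bdevperf instance; this sketch simply mirrors the commands visible in the trace, using the same socket path:

  # List the NVMe controller and bdev names known to the running bdevperf process.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expected here: NVMe0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # expected here: NVMe0n1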
00:19:55.359 1151.62 IOPS, 4.50 MiB/s 00:19:55.359 Latency(us) 00:19:55.359 [2024-10-08T09:25:47.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.359 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:55.359 Verification LBA range: start 0x0 length 0x4000 00:19:55.359 NVMe0n1 : 8.18 1126.66 4.40 15.65 0.00 111931.68 2740.60 7015926.69 00:19:55.359 [2024-10-08T09:25:47.042Z] =================================================================================================================== 00:19:55.359 [2024-10-08T09:25:47.042Z] Total : 1126.66 4.40 15.65 0.00 111931.68 2740.60 7015926.69 00:19:55.359 { 00:19:55.359 "results": [ 00:19:55.359 { 00:19:55.359 "job": "NVMe0n1", 00:19:55.359 "core_mask": "0x4", 00:19:55.359 "workload": "verify", 00:19:55.359 "status": "finished", 00:19:55.359 "verify_range": { 00:19:55.359 "start": 0, 00:19:55.359 "length": 16384 00:19:55.359 }, 00:19:55.359 "queue_depth": 128, 00:19:55.359 "io_size": 4096, 00:19:55.359 "runtime": 8.17724, 00:19:55.359 "iops": 1126.6637643018917, 00:19:55.359 "mibps": 4.401030329304264, 00:19:55.359 "io_failed": 128, 00:19:55.359 "io_timeout": 0, 00:19:55.359 "avg_latency_us": 111931.67913577483, 00:19:55.359 "min_latency_us": 2740.5963636363635, 00:19:55.359 "max_latency_us": 7015926.69090909 00:19:55.359 } 00:19:55.359 ], 00:19:55.359 "core_count": 1 00:19:55.359 } 00:19:55.927 09:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:55.927 09:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:55.927 09:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:56.186 09:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:56.186 09:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:56.186 09:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:56.186 09:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:56.447 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:56.447 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82416 00:19:56.447 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82392 00:19:56.447 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82392 ']' 00:19:56.447 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82392 00:19:56.447 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:56.447 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.447 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82392 00:19:56.734 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:56.734 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:56.734 killing process with pid 82392 00:19:56.734 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82392' 00:19:56.734 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@969 -- # kill 82392 00:19:56.734 Received shutdown signal, test time was about 9.324123 seconds 00:19:56.734 00:19:56.734 Latency(us) 00:19:56.734 [2024-10-08T09:25:48.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.734 [2024-10-08T09:25:48.417Z] =================================================================================================================== 00:19:56.734 [2024-10-08T09:25:48.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.734 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82392 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:56.992 [2024-10-08 09:25:48.627666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82539 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82539 /var/tmp/bdevperf.sock 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82539 ']' 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.992 09:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:57.251 [2024-10-08 09:25:48.701920] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:19:57.251 [2024-10-08 09:25:48.702015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82539 ] 00:19:57.251 [2024-10-08 09:25:48.832428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.251 [2024-10-08 09:25:48.924367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.509 [2024-10-08 09:25:49.000556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:58.074 09:25:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.074 09:25:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:58.074 09:25:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:58.331 09:25:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:58.590 NVMe0n1 00:19:58.590 09:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:58.590 09:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82562 00:19:58.590 09:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:58.848 Running I/O for 10 seconds... 
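For this second run the controller is attached with explicit recovery knobs (host/timeout.sh@78 and @79 above): --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2 and --ctrlr-loss-timeout-sec 5, so reconnects are retried every second, I/O is failed fast after two seconds, and the controller is declared lost after five. The sketch below only restates the attach sequence from the trace against the same RPC socket; it is not an additional step performed by the test:

  # Attach NVMe0 over TCP with the timeout/reconnect parameters used by the timeout test.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $RPC bdev_nvme_set_options -r -1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1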
09:25:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:59.786 8439.00 IOPS, 32.96 MiB/s [2024-10-08T09:25:51.469Z]
00:19:59.786 [2024-10-08 09:25:51.404668 onward] nvme_qpair.c: 223/474: four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:0-3, cdw10:00000000 cdw11:00000000) printed and completed as ABORTED - SQ DELETION (00/08); per-command dump condensed here.
00:19:59.786 [2024-10-08 09:25:51.404799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de2e0 is same with the state(6) to be set
00:19:59.786-00:19:59.788 [2024-10-08 09:25:51.405019 onward] nvme_qpair.c: 243/474: long run of *NOTICE* pairs for queued I/O on qid:1 — READ lba:75872 and WRITE lba 76000-76592 (len:8, SGL DATA BLOCK) — each completed as ABORTED - SQ DELETION (00/08); per-command dump condensed here, ending mid-entry:
00:19:59.788 [2024-10-08 09:25:51.406455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.788 [2024-10-08 09:25:51.406473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.788 [2024-10-08 09:25:51.406491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.788 [2024-10-08 09:25:51.406509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.788 [2024-10-08 09:25:51.406525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.788 [2024-10-08 09:25:51.406543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.788 [2024-10-08 09:25:51.406560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.788 [2024-10-08 09:25:51.406576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.788 [2024-10-08 09:25:51.406594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.788 [2024-10-08 09:25:51.406611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.788 [2024-10-08 09:25:51.406631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:59.789 [2024-10-08 09:25:51.406660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406866] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.406983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.406992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.407000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.407017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.407034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407044] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75960 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.789 [2024-10-08 09:25:51.407299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.407321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.407338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.407355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.407373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.789 [2024-10-08 09:25:51.407391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:59.789 [2024-10-08 09:25:51.407408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.789 [2024-10-08 09:25:51.407416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194bcf0 is same with the state(6) to be set 00:19:59.790 [2024-10-08 09:25:51.407427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.790 [2024-10-08 09:25:51.407433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.790 [2024-10-08 09:25:51.407440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76888 len:8 PRP1 0x0 PRP2 0x0 00:19:59.790 [2024-10-08 09:25:51.407448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.790 [2024-10-08 09:25:51.407534] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x194bcf0 was disconnected and freed. reset controller. 00:19:59.790 [2024-10-08 09:25:51.407747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:59.790 [2024-10-08 09:25:51.407782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de2e0 (9): Bad file descriptor 00:19:59.790 [2024-10-08 09:25:51.407874] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.790 [2024-10-08 09:25:51.407895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de2e0 with addr=10.0.0.3, port=4420 00:19:59.790 [2024-10-08 09:25:51.407906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de2e0 is same with the state(6) to be set 00:19:59.790 [2024-10-08 09:25:51.407923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de2e0 (9): Bad file descriptor 00:19:59.790 [2024-10-08 09:25:51.407944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:59.790 [2024-10-08 09:25:51.407954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:59.790 [2024-10-08 09:25:51.407965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:59.790 [2024-10-08 09:25:51.407984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:59.790 [2024-10-08 09:25:51.407995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:59.790 09:25:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:00.726 4742.00 IOPS, 18.52 MiB/s [2024-10-08T09:25:52.409Z] [2024-10-08 09:25:52.408085] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.726 [2024-10-08 09:25:52.408142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de2e0 with addr=10.0.0.3, port=4420 00:20:00.726 [2024-10-08 09:25:52.408154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de2e0 is same with the state(6) to be set 00:20:00.726 [2024-10-08 09:25:52.408171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de2e0 (9): Bad file descriptor 00:20:00.726 [2024-10-08 09:25:52.408185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:00.726 [2024-10-08 09:25:52.408194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:00.726 [2024-10-08 09:25:52.408202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:00.726 [2024-10-08 09:25:52.408220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.726 [2024-10-08 09:25:52.408229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:00.985 09:25:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:01.243 [2024-10-08 09:25:52.697767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:01.243 09:25:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82562 00:20:01.811 3161.33 IOPS, 12.35 MiB/s [2024-10-08T09:25:53.494Z] [2024-10-08 09:25:53.419496] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
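Note on the reconnect loop above: the "connect() failed, errno = 111" lines from uring_sock_create correspond to ECONNREFUSED on Linux, i.e. nothing is listening on 10.0.0.3:4420 until the nvmf_subsystem_add_listener RPC re-adds the listener, after which the controller reset succeeds. A minimal standalone sketch (not part of the SPDK test scripts) confirming that mapping:

    # Hypothetical helper, not part of the test suite: show what
    # "errno = 111" in the uring_sock_create log lines means on Linux.
    import errno
    import os

    assert errno.ECONNREFUSED == 111          # errno 111 is ECONNREFUSED on Linux
    print(os.strerror(errno.ECONNREFUSED))    # -> "Connection refused"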
00:20:03.682 2371.00 IOPS, 9.26 MiB/s [2024-10-08T09:25:56.301Z] 3359.60 IOPS, 13.12 MiB/s [2024-10-08T09:25:57.677Z] 4207.67 IOPS, 16.44 MiB/s [2024-10-08T09:25:58.611Z] 4813.43 IOPS, 18.80 MiB/s [2024-10-08T09:25:59.546Z] 5267.75 IOPS, 20.58 MiB/s [2024-10-08T09:26:00.498Z] 5622.78 IOPS, 21.96 MiB/s [2024-10-08T09:26:00.498Z] 5903.80 IOPS, 23.06 MiB/s 00:20:08.815 Latency(us) 00:20:08.815 [2024-10-08T09:26:00.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.815 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:08.815 Verification LBA range: start 0x0 length 0x4000 00:20:08.815 NVMe0n1 : 10.00 5914.28 23.10 0.00 0.00 21615.25 2055.45 3019898.88 00:20:08.815 [2024-10-08T09:26:00.498Z] =================================================================================================================== 00:20:08.815 [2024-10-08T09:26:00.498Z] Total : 5914.28 23.10 0.00 0.00 21615.25 2055.45 3019898.88 00:20:08.815 { 00:20:08.815 "results": [ 00:20:08.815 { 00:20:08.815 "job": "NVMe0n1", 00:20:08.815 "core_mask": "0x4", 00:20:08.815 "workload": "verify", 00:20:08.815 "status": "finished", 00:20:08.815 "verify_range": { 00:20:08.815 "start": 0, 00:20:08.815 "length": 16384 00:20:08.815 }, 00:20:08.815 "queue_depth": 128, 00:20:08.815 "io_size": 4096, 00:20:08.815 "runtime": 10.003915, 00:20:08.815 "iops": 5914.284557595702, 00:20:08.815 "mibps": 23.10267405310821, 00:20:08.815 "io_failed": 0, 00:20:08.815 "io_timeout": 0, 00:20:08.815 "avg_latency_us": 21615.25406790755, 00:20:08.815 "min_latency_us": 2055.447272727273, 00:20:08.815 "max_latency_us": 3019898.88 00:20:08.815 } 00:20:08.815 ], 00:20:08.815 "core_count": 1 00:20:08.815 } 00:20:08.815 09:26:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82667 00:20:08.815 09:26:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.815 09:26:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:20:08.815 Running I/O for 10 seconds... 
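The derived fields in the bdevperf summary above follow directly from the primary fields of the results JSON; a minimal standalone sketch (not part of the SPDK scripts) reproducing them:

    # Hypothetical sanity check: recompute the derived bdevperf numbers
    # from the primary fields printed in the results JSON above.
    iops        = 5914.284557595702   # "iops"
    io_size     = 4096                # "io_size" (bytes per I/O)
    queue_depth = 128                 # "queue_depth"

    mibps = iops * io_size / (1 << 20)
    print(f"MiB/s       ~ {mibps:.2f}")              # ~23.10, matches "mibps"

    # With the queue kept full, Little's law gives avg latency ~ depth / IOPS.
    avg_latency_us = queue_depth / iops * 1e6
    print(f"avg latency ~ {avg_latency_us:.0f} us")  # ~21643 us vs reported 21615.25 us

The small gap between the Little's-law estimate (~21.6 ms) and the reported 21615.25 us average is plausible given that the queue is not perfectly full for the entire 10 s run, which includes the controller-reset window.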
00:20:09.752 09:26:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:10.013 7914.00 IOPS, 30.91 MiB/s [2024-10-08T09:26:01.696Z] [2024-10-08 09:26:01.559545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71264 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.559982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.559991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:10.013 [2024-10-08 09:26:01.560009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560222] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.013 [2024-10-08 09:26:01.560231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.013 [2024-10-08 09:26:01.560613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560864] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.560892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.560997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.561009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.561027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.561045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.561139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.561156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.561173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.561273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.561295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.561315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:10.014 [2024-10-08 09:26:01.561926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.561977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.561984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.562069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.562083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.562095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.014 [2024-10-08 09:26:01.562104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.562113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.562121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.562510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.562522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.562532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.562540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.562549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.562557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.562566] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.562574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.562583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.014 [2024-10-08 09:26:01.562591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.014 [2024-10-08 09:26:01.562600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.015 [2024-10-08 09:26:01.562793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:10.015 [2024-10-08 09:26:01.562947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.562983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.562993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70968 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:10.015 [2024-10-08 09:26:01.563374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.015 [2024-10-08 09:26:01.563418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.015 [2024-10-08 09:26:01.563427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 
09:26:01.563549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.016 [2024-10-08 09:26:01.563647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194d490 is same with the state(6) to be set 00:20:10.016 [2024-10-08 09:26:01.563677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:10.016 [2024-10-08 09:26:01.563684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:10.016 [2024-10-08 09:26:01.563691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71184 len:8 PRP1 0x0 PRP2 0x0 00:20:10.016 [2024-10-08 09:26:01.563700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.563760] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x194d490 was disconnected and freed. reset controller. 
00:20:10.016 [2024-10-08 09:26:01.564753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.016 [2024-10-08 09:26:01.565098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.565405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.016 [2024-10-08 09:26:01.565817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.566143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.016 [2024-10-08 09:26:01.566460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.566765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.016 [2024-10-08 09:26:01.567167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.016 [2024-10-08 09:26:01.567343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de2e0 is same with the state(6) to be set 00:20:10.016 [2024-10-08 09:26:01.567622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.016 [2024-10-08 09:26:01.567646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de2e0 (9): Bad file descriptor 00:20:10.016 [2024-10-08 09:26:01.567728] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.016 [2024-10-08 09:26:01.567761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de2e0 with addr=10.0.0.3, port=4420 00:20:10.016 [2024-10-08 09:26:01.567775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de2e0 is same with the state(6) to be set 00:20:10.016 [2024-10-08 09:26:01.567792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de2e0 (9): Bad file descriptor 00:20:10.016 [2024-10-08 09:26:01.567807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:10.016 [2024-10-08 09:26:01.567815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:10.016 [2024-10-08 09:26:01.567825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:10.016 [2024-10-08 09:26:01.567843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.016 [2024-10-08 09:26:01.567853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.016 09:26:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:10.952 4412.00 IOPS, 17.23 MiB/s [2024-10-08T09:26:02.635Z] [2024-10-08 09:26:02.567921] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.952 [2024-10-08 09:26:02.568274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de2e0 with addr=10.0.0.3, port=4420 00:20:10.952 [2024-10-08 09:26:02.568629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de2e0 is same with the state(6) to be set 00:20:10.952 [2024-10-08 09:26:02.569071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de2e0 (9): Bad file descriptor 00:20:10.952 [2024-10-08 09:26:02.569450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:10.952 [2024-10-08 09:26:02.569801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:10.952 [2024-10-08 09:26:02.570196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:10.952 [2024-10-08 09:26:02.570421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.952 [2024-10-08 09:26:02.570614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:12.146 2941.33 IOPS, 11.49 MiB/s [2024-10-08T09:26:03.829Z] [2024-10-08 09:26:03.571064] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.146 [2024-10-08 09:26:03.571388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de2e0 with addr=10.0.0.3, port=4420 00:20:12.146 [2024-10-08 09:26:03.571850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de2e0 is same with the state(6) to be set 00:20:12.146 [2024-10-08 09:26:03.572224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de2e0 (9): Bad file descriptor 00:20:12.146 [2024-10-08 09:26:03.572637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:12.146 [2024-10-08 09:26:03.573012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:12.146 [2024-10-08 09:26:03.573345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:12.146 [2024-10-08 09:26:03.573553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:12.146 [2024-10-08 09:26:03.573767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.082 2206.00 IOPS, 8.62 MiB/s [2024-10-08T09:26:04.765Z] [2024-10-08 09:26:04.574564] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.082 [2024-10-08 09:26:04.574961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18de2e0 with addr=10.0.0.3, port=4420 00:20:13.082 [2024-10-08 09:26:04.574984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de2e0 is same with the state(6) to be set 00:20:13.082 [2024-10-08 09:26:04.575230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de2e0 (9): Bad file descriptor 00:20:13.082 [2024-10-08 09:26:04.575454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.082 [2024-10-08 09:26:04.575468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:13.082 [2024-10-08 09:26:04.575476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:13.082 [2024-10-08 09:26:04.578791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.082 [2024-10-08 09:26:04.579125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.082 09:26:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:13.341 [2024-10-08 09:26:04.864810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:13.341 09:26:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82667 00:20:14.166 1764.80 IOPS, 6.89 MiB/s [2024-10-08T09:26:05.849Z] [2024-10-08 09:26:05.611630] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:16.035 2759.83 IOPS, 10.78 MiB/s [2024-10-08T09:26:08.653Z] 3741.57 IOPS, 14.62 MiB/s [2024-10-08T09:26:09.588Z] 4462.62 IOPS, 17.43 MiB/s [2024-10-08T09:26:10.523Z] 5024.78 IOPS, 19.63 MiB/s [2024-10-08T09:26:10.523Z] 5488.30 IOPS, 21.44 MiB/s 00:20:18.840 Latency(us) 00:20:18.840 [2024-10-08T09:26:10.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.840 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:18.840 Verification LBA range: start 0x0 length 0x4000 00:20:18.840 NVMe0n1 : 10.01 5495.28 21.47 4713.22 0.00 12510.74 543.65 3019898.88 00:20:18.840 [2024-10-08T09:26:10.523Z] =================================================================================================================== 00:20:18.840 [2024-10-08T09:26:10.523Z] Total : 5495.28 21.47 4713.22 0.00 12510.74 0.00 3019898.88 00:20:18.840 { 00:20:18.840 "results": [ 00:20:18.840 { 00:20:18.840 "job": "NVMe0n1", 00:20:18.840 "core_mask": "0x4", 00:20:18.840 "workload": "verify", 00:20:18.840 "status": "finished", 00:20:18.840 "verify_range": { 00:20:18.840 "start": 0, 00:20:18.840 "length": 16384 00:20:18.840 }, 00:20:18.840 "queue_depth": 128, 00:20:18.840 "io_size": 4096, 00:20:18.840 "runtime": 10.006952, 00:20:18.840 "iops": 5495.2796815653755, 00:20:18.841 "mibps": 21.465936256114748, 00:20:18.841 "io_failed": 47165, 00:20:18.841 "io_timeout": 0, 00:20:18.841 "avg_latency_us": 12510.737632925044, 00:20:18.841 "min_latency_us": 543.6509090909091, 00:20:18.841 "max_latency_us": 3019898.88 00:20:18.841 } 00:20:18.841 ], 00:20:18.841 "core_count": 1 00:20:18.841 } 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82539 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82539 ']' 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82539 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82539 00:20:18.841 killing process with pid 82539 00:20:18.841 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.841 00:20:18.841 Latency(us) 00:20:18.841 [2024-10-08T09:26:10.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.841 [2024-10-08T09:26:10.524Z] =================================================================================================================== 00:20:18.841 [2024-10-08T09:26:10.524Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82539' 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82539 00:20:18.841 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82539 00:20:19.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:19.407 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82781 00:20:19.407 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:19.407 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82781 /var/tmp/bdevperf.sock 00:20:19.408 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82781 ']' 00:20:19.408 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.408 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:19.408 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.408 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:19.408 09:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.408 [2024-10-08 09:26:10.862031] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:20:19.408 [2024-10-08 09:26:10.862367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82781 ] 00:20:19.408 [2024-10-08 09:26:10.997096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.408 [2024-10-08 09:26:11.081122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.665 [2024-10-08 09:26:11.156031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:20.232 09:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:20.232 09:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:20.232 09:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82796 00:20:20.232 09:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82781 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:20.232 09:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:20.491 09:26:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:20.750 NVMe0n1 00:20:20.750 09:26:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82839 00:20:20.750 09:26:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:20.750 09:26:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:21.019 Running I/O for 10 seconds... 
00:20:21.971 09:26:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:21.971 18034.00 IOPS, 70.45 MiB/s [2024-10-08T09:26:13.654Z] [2024-10-08 09:26:13.632387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.632614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.632787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.632946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.633088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.633199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.633307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.633326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.971 [2024-10-08 09:26:13.633429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with t[2024-10-08 09:26:13.633487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.971 he state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.633601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.633891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with t[2024-10-08 09:26:13.633505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.971 [2024-10-08 09:26:13.633954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.971 [2024-10-08 09:26:13.633971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.971 [2024-10-08 09:26:13.633980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.971 [2024-10-08 09:26:13.633989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.971 he state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.971 [2024-10-08 09:26:13.634723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b2e0 is same [2024-10-08 09:26:13.634742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with twith the state(6) to be set 00:20:21.971 he state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634851] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.971 [2024-10-08 09:26:13.634859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.634993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 
00:20:21.972 [2024-10-08 09:26:13.635025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is 
same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635523] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.972 [2024-10-08 09:26:13.635548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.973 [2024-10-08 09:26:13.635556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.973 [2024-10-08 09:26:13.635565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.973 [2024-10-08 09:26:13.635572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5250 is same with the state(6) to be set 00:20:21.973 [2024-10-08 09:26:13.635643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:21.973 [2024-10-08 09:26:13.635819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.635985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.635996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636027] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.636913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.636936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:37 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.973 [2024-10-08 09:26:13.637256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.973 [2024-10-08 09:26:13.637264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23384 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:21.974 [2024-10-08 09:26:13.637593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637809] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.637983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.637993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.638002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.638012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.638022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.638032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.638041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.638051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.974 [2024-10-08 09:26:13.638068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.974 [2024-10-08 09:26:13.638079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:21.975 [2024-10-08 09:26:13.638691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.975 [2024-10-08 09:26:13.638855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.975 [2024-10-08 09:26:13.638865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.638873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.638884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.638893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.638902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.638910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 
09:26:13.638921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.638929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.638939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.638947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.638957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.638965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.638982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.638991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.639001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.639015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.639026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.639034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.639045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.639053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.639063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.639071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.639082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.976 [2024-10-08 09:26:13.639090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.639098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408d00 is same with the state(6) to be set 00:20:21.976 [2024-10-08 09:26:13.639109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:21.976 [2024-10-08 09:26:13.639116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:20:21.976 [2024-10-08 09:26:13.639130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:8 PRP1 0x0 PRP2 0x0 00:20:21.976 [2024-10-08 09:26:13.639138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.976 [2024-10-08 09:26:13.639225] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1408d00 was disconnected and freed. reset controller. 00:20:21.976 [2024-10-08 09:26:13.640077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.976 [2024-10-08 09:26:13.640124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b2e0 (9): Bad file descriptor 00:20:21.976 [2024-10-08 09:26:13.640231] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.976 [2024-10-08 09:26:13.640252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b2e0 with addr=10.0.0.3, port=4420 00:20:21.976 [2024-10-08 09:26:13.640263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b2e0 is same with the state(6) to be set 00:20:21.976 [2024-10-08 09:26:13.640280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b2e0 (9): Bad file descriptor 00:20:21.976 [2024-10-08 09:26:13.640294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.976 [2024-10-08 09:26:13.640302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.976 [2024-10-08 09:26:13.640313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.976 [2024-10-08 09:26:13.640332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.976 [2024-10-08 09:26:13.640342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.235 09:26:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82839 00:20:24.107 9779.00 IOPS, 38.20 MiB/s [2024-10-08T09:26:15.790Z] 6519.33 IOPS, 25.47 MiB/s [2024-10-08T09:26:15.790Z] [2024-10-08 09:26:15.640425] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.107 [2024-10-08 09:26:15.640475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b2e0 with addr=10.0.0.3, port=4420 00:20:24.107 [2024-10-08 09:26:15.640489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b2e0 is same with the state(6) to be set 00:20:24.107 [2024-10-08 09:26:15.640505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b2e0 (9): Bad file descriptor 00:20:24.107 [2024-10-08 09:26:15.640519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:24.107 [2024-10-08 09:26:15.640527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:24.107 [2024-10-08 09:26:15.640535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:24.107 [2024-10-08 09:26:15.640552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:24.107 [2024-10-08 09:26:15.640561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:25.978 4889.50 IOPS, 19.10 MiB/s [2024-10-08T09:26:17.661Z] 3911.60 IOPS, 15.28 MiB/s [2024-10-08T09:26:17.661Z] [2024-10-08 09:26:17.640685] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.978 [2024-10-08 09:26:17.640717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b2e0 with addr=10.0.0.3, port=4420 00:20:25.978 [2024-10-08 09:26:17.640730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b2e0 is same with the state(6) to be set 00:20:25.978 [2024-10-08 09:26:17.640788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b2e0 (9): Bad file descriptor 00:20:25.978 [2024-10-08 09:26:17.640804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:25.978 [2024-10-08 09:26:17.640813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:25.978 [2024-10-08 09:26:17.640821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:25.978 [2024-10-08 09:26:17.640839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:25.978 [2024-10-08 09:26:17.640849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:28.289 3259.67 IOPS, 12.73 MiB/s [2024-10-08T09:26:19.972Z] 2794.00 IOPS, 10.91 MiB/s [2024-10-08T09:26:19.972Z] [2024-10-08 09:26:19.640977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:28.289 [2024-10-08 09:26:19.641008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:28.289 [2024-10-08 09:26:19.641030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:28.289 [2024-10-08 09:26:19.641038] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:28.289 [2024-10-08 09:26:19.641056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
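The records above show the retry cadence for this phase of the test: each connect() to 10.0.0.3:4420 is refused (errno 111), and the controller reset is retried roughly every 2 seconds until the ~8 s run ends, while the periodic IOPS figures fall from 9779.00 to 2444.75. As a rough, illustrative sketch of the check timeout.sh performs below (its full logic is not reproduced in this log), the number of 'reconnect delay' entries recorded in trace.txt can be counted and required to exceed 2; the trace path, the grep pattern, and the threshold of 2 appear in the log that follows, while the if/exit structure here is assumed:

# Sketch only, not the actual timeout.sh: count how many reconnect delays were traced.
count=$(grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
if (( count <= 2 )); then
    echo "expected more than 2 reconnect delays, saw $count" >&2
    exit 1
fi

For reference, the 9.43 MiB/s reported in the summary below is consistent with 2414.55 IOPS of 4096-byte reads: 2414.55 * 4096 / 1048576 ≈ 9.43 MiB/s over the 8.10 s runtime.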
00:20:29.226 2444.75 IOPS, 9.55 MiB/s 00:20:29.226 Latency(us) 00:20:29.226 [2024-10-08T09:26:20.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.226 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:29.226 NVMe0n1 : 8.10 2414.55 9.43 15.80 0.00 52568.55 6762.12 7046430.72 00:20:29.226 [2024-10-08T09:26:20.909Z] =================================================================================================================== 00:20:29.226 [2024-10-08T09:26:20.909Z] Total : 2414.55 9.43 15.80 0.00 52568.55 6762.12 7046430.72 00:20:29.226 { 00:20:29.226 "results": [ 00:20:29.226 { 00:20:29.226 "job": "NVMe0n1", 00:20:29.226 "core_mask": "0x4", 00:20:29.226 "workload": "randread", 00:20:29.226 "status": "finished", 00:20:29.226 "queue_depth": 128, 00:20:29.226 "io_size": 4096, 00:20:29.226 "runtime": 8.100051, 00:20:29.226 "iops": 2414.5526984953553, 00:20:29.226 "mibps": 9.431846478497482, 00:20:29.226 "io_failed": 128, 00:20:29.226 "io_timeout": 0, 00:20:29.226 "avg_latency_us": 52568.54890877688, 00:20:29.226 "min_latency_us": 6762.123636363636, 00:20:29.226 "max_latency_us": 7046430.72 00:20:29.226 } 00:20:29.226 ], 00:20:29.226 "core_count": 1 00:20:29.226 } 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:29.226 Attaching 5 probes... 00:20:29.226 1294.487487: reset bdev controller NVMe0 00:20:29.226 1294.592140: reconnect bdev controller NVMe0 00:20:29.226 3294.796686: reconnect delay bdev controller NVMe0 00:20:29.226 3294.810323: reconnect bdev controller NVMe0 00:20:29.226 5295.058965: reconnect delay bdev controller NVMe0 00:20:29.226 5295.072170: reconnect bdev controller NVMe0 00:20:29.226 7295.371978: reconnect delay bdev controller NVMe0 00:20:29.226 7295.391984: reconnect bdev controller NVMe0 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82796 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82781 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82781 ']' 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82781 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82781 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82781' 00:20:29.226 killing process with pid 82781 00:20:29.226 Received shutdown signal, test time was about 8.168640 seconds 00:20:29.226 00:20:29.226 Latency(us) 00:20:29.226 
[2024-10-08T09:26:20.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.226 [2024-10-08T09:26:20.909Z] =================================================================================================================== 00:20:29.226 [2024-10-08T09:26:20.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82781 00:20:29.226 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82781 00:20:29.485 09:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:29.744 rmmod nvme_tcp 00:20:29.744 rmmod nvme_fabrics 00:20:29.744 rmmod nvme_keyring 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@515 -- # '[' -n 82343 ']' 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # killprocess 82343 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82343 ']' 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82343 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82343 00:20:29.744 killing process with pid 82343 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82343' 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82343 00:20:29.744 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82343 00:20:30.002 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:30.002 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:30.002 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:30.002 09:26:21 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:30.002 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-save 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:30.261 00:20:30.261 real 0m47.708s 00:20:30.261 user 2m18.593s 00:20:30.261 sys 0m6.095s 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.261 09:26:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:30.261 ************************************ 00:20:30.261 END TEST nvmf_timeout 00:20:30.261 ************************************ 00:20:30.520 09:26:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:30.520 09:26:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:30.520 00:20:30.520 real 5m19.033s 00:20:30.520 user 13m46.126s 00:20:30.520 sys 1m12.037s 00:20:30.520 09:26:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.520 09:26:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:20:30.520 ************************************ 00:20:30.520 END TEST nvmf_host 00:20:30.520 ************************************ 00:20:30.520 09:26:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:30.520 09:26:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:30.520 00:20:30.520 real 13m8.523s 00:20:30.520 user 31m32.023s 00:20:30.520 sys 3m14.884s 00:20:30.520 09:26:22 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.520 ************************************ 00:20:30.520 END TEST nvmf_tcp 00:20:30.520 ************************************ 00:20:30.520 09:26:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:30.520 09:26:22 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:20:30.520 09:26:22 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:30.521 09:26:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:30.521 09:26:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.521 09:26:22 -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 ************************************ 00:20:30.521 START TEST nvmf_dif 00:20:30.521 ************************************ 00:20:30.521 09:26:22 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:30.521 * Looking for test storage... 00:20:30.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:30.521 09:26:22 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:30.521 09:26:22 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:20:30.521 09:26:22 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:30.780 09:26:22 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.780 09:26:22 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:30.780 09:26:22 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.780 09:26:22 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:30.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.780 --rc genhtml_branch_coverage=1 00:20:30.780 --rc genhtml_function_coverage=1 00:20:30.780 --rc genhtml_legend=1 00:20:30.780 --rc geninfo_all_blocks=1 00:20:30.780 --rc geninfo_unexecuted_blocks=1 00:20:30.780 00:20:30.780 ' 00:20:30.780 09:26:22 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:30.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.780 --rc genhtml_branch_coverage=1 00:20:30.780 --rc genhtml_function_coverage=1 00:20:30.780 --rc genhtml_legend=1 00:20:30.780 --rc geninfo_all_blocks=1 00:20:30.780 --rc geninfo_unexecuted_blocks=1 00:20:30.780 00:20:30.780 ' 00:20:30.780 09:26:22 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:30.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.780 --rc genhtml_branch_coverage=1 00:20:30.780 --rc genhtml_function_coverage=1 00:20:30.780 --rc genhtml_legend=1 00:20:30.780 --rc geninfo_all_blocks=1 00:20:30.781 --rc geninfo_unexecuted_blocks=1 00:20:30.781 00:20:30.781 ' 00:20:30.781 09:26:22 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:30.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.781 --rc genhtml_branch_coverage=1 00:20:30.781 --rc genhtml_function_coverage=1 00:20:30.781 --rc genhtml_legend=1 00:20:30.781 --rc geninfo_all_blocks=1 00:20:30.781 --rc geninfo_unexecuted_blocks=1 00:20:30.781 00:20:30.781 ' 00:20:30.781 09:26:22 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.781 09:26:22 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:30.781 09:26:22 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.781 09:26:22 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.781 09:26:22 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.781 09:26:22 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.781 09:26:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.781 09:26:22 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.781 09:26:22 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.781 09:26:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:30.781 09:26:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.781 09:26:22 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.781 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.781 09:26:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:30.781 09:26:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:30.781 09:26:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:30.781 09:26:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:30.781 09:26:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.781 09:26:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:30.781 09:26:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:30.781 Cannot find device 
"nvmf_init_br" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:30.781 Cannot find device "nvmf_init_br2" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:30.781 Cannot find device "nvmf_tgt_br" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:30.781 Cannot find device "nvmf_tgt_br2" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:30.781 Cannot find device "nvmf_init_br" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:30.781 Cannot find device "nvmf_init_br2" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:30.781 Cannot find device "nvmf_tgt_br" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:30.781 Cannot find device "nvmf_tgt_br2" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:30.781 Cannot find device "nvmf_br" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:30.781 Cannot find device "nvmf_init_if" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:30.781 Cannot find device "nvmf_init_if2" 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:30.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:30.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:30.781 09:26:22 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:31.040 09:26:22 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:31.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:31.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:20:31.041 00:20:31.041 --- 10.0.0.3 ping statistics --- 00:20:31.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.041 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:31.041 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:31.041 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:20:31.041 00:20:31.041 --- 10.0.0.4 ping statistics --- 00:20:31.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.041 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:31.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:31.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:31.041 00:20:31.041 --- 10.0.0.1 ping statistics --- 00:20:31.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.041 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:31.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:20:31.041 00:20:31.041 --- 10.0.0.2 ping statistics --- 00:20:31.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.041 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@459 -- # return 0 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:20:31.041 09:26:22 nvmf_dif -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:31.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:31.608 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:31.608 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:31.608 09:26:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:31.608 09:26:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:31.608 09:26:23 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:31.608 09:26:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=83334 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:31.608 09:26:23 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 83334 00:20:31.608 09:26:23 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 83334 ']' 00:20:31.608 09:26:23 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.608 09:26:23 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:31.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.608 09:26:23 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
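[editor's note] For reference, the nvmf_veth_init sequence traced above reduces to a small veth-plus-bridge topology with the target side pushed into a network namespace. A cut-down sketch, one initiator/target pair instead of the two pairs the harness creates, using the same interface names and addresses (run as root):

    # Target-side network namespace and one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # bridge joining the host-side ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP to the listener
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # admit bridged traffic
    ping -c 1 10.0.0.3                                          # initiator -> target sanity check, as above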
00:20:31.608 09:26:23 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:31.608 09:26:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:31.608 [2024-10-08 09:26:23.183496] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:20:31.608 [2024-10-08 09:26:23.183578] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.867 [2024-10-08 09:26:23.325676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.867 [2024-10-08 09:26:23.422572] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.867 [2024-10-08 09:26:23.422629] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.867 [2024-10-08 09:26:23.422643] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.867 [2024-10-08 09:26:23.422654] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.867 [2024-10-08 09:26:23.422663] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.867 [2024-10-08 09:26:23.423154] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.867 [2024-10-08 09:26:23.480336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:20:32.804 09:26:24 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:32.804 09:26:24 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.804 09:26:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:32.804 09:26:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:32.804 [2024-10-08 09:26:24.217462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.804 09:26:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:32.804 09:26:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:32.804 ************************************ 00:20:32.804 START TEST fio_dif_1_default 00:20:32.804 ************************************ 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:32.804 09:26:24 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:32.804 bdev_null0 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:32.804 [2024-10-08 09:26:24.261603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:32.804 09:26:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:32.804 { 00:20:32.804 "params": { 00:20:32.804 "name": "Nvme$subsystem", 00:20:32.804 "trtype": "$TEST_TRANSPORT", 00:20:32.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.805 "adrfam": "ipv4", 00:20:32.805 "trsvcid": "$NVMF_PORT", 00:20:32.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.805 "hdgst": ${hdgst:-false}, 00:20:32.805 "ddgst": ${ddgst:-false} 00:20:32.805 }, 00:20:32.805 "method": "bdev_nvme_attach_controller" 00:20:32.805 } 00:20:32.805 EOF 00:20:32.805 )") 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
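[editor's note] The rpc_cmd calls traced above can be replayed outside the harness with SPDK's stock RPC client; a sketch, assuming the nvmf_tgt started earlier is listening on the default /var/tmp/spdk.sock (the RPC socket is a UNIX-domain socket, so the target's network namespace does not matter here):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py              # stock SPDK RPC client
    # TCP transport with DIF insert/strip enabled, as in target/dif.sh
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # Subsystem cnode0 backed by the null bdev, listening on the veth address
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

With --dif-insert-or-strip the TCP transport inserts the protection metadata on writes and strips it on reads, so the host sees plain 512-byte blocks while the backing bdev carries the 16-byte DIF; that is the behavior the fio_dif_* tests below exercise.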
00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:32.805 "params": { 00:20:32.805 "name": "Nvme0", 00:20:32.805 "trtype": "tcp", 00:20:32.805 "traddr": "10.0.0.3", 00:20:32.805 "adrfam": "ipv4", 00:20:32.805 "trsvcid": "4420", 00:20:32.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:32.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:32.805 "hdgst": false, 00:20:32.805 "ddgst": false 00:20:32.805 }, 00:20:32.805 "method": "bdev_nvme_attach_controller" 00:20:32.805 }' 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:32.805 09:26:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:33.064 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:33.064 fio-3.35 00:20:33.064 Starting 1 thread 00:20:45.274 00:20:45.274 filename0: (groupid=0, jobs=1): err= 0: pid=83401: Tue Oct 8 09:26:35 2024 00:20:45.274 read: IOPS=9771, BW=38.2MiB/s (40.0MB/s)(382MiB/10001msec) 00:20:45.274 slat (nsec): min=5820, max=72657, avg=7482.18, stdev=3117.54 00:20:45.274 clat (usec): min=327, max=2910, avg=386.89, stdev=36.30 00:20:45.274 lat (usec): min=333, max=2920, avg=394.38, stdev=37.04 00:20:45.274 clat percentiles (usec): 00:20:45.274 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 359], 00:20:45.274 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 392], 00:20:45.274 | 70.00th=[ 404], 80.00th=[ 412], 90.00th=[ 429], 95.00th=[ 441], 00:20:45.274 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 545], 99.95th=[ 578], 00:20:45.274 | 99.99th=[ 865] 00:20:45.274 bw ( KiB/s): min=38067, max=40128, per=100.00%, avg=39135.32, stdev=503.31, samples=19 00:20:45.274 iops : min= 9516, max=10032, avg=9783.79, stdev=125.92, samples=19 00:20:45.274 lat (usec) : 500=99.64%, 750=0.34%, 1000=0.02% 00:20:45.274 lat (msec) : 4=0.01% 00:20:45.274 cpu : usr=83.67%, sys=14.39%, ctx=22, majf=0, minf=0 00:20:45.274 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:45.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.274 issued rwts: total=97728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.274 latency : target=0, 
window=0, percentile=100.00%, depth=4 00:20:45.274 00:20:45.274 Run status group 0 (all jobs): 00:20:45.274 READ: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=382MiB (400MB), run=10001-10001msec 00:20:45.274 09:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:45.274 09:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:45.274 09:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:45.274 09:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:45.274 09:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:45.274 09:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 ************************************ 00:20:45.275 END TEST fio_dif_1_default 00:20:45.275 ************************************ 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 00:20:45.275 real 0m11.131s 00:20:45.275 user 0m9.117s 00:20:45.275 sys 0m1.751s 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 09:26:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:45.275 09:26:35 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:45.275 09:26:35 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 ************************************ 00:20:45.275 START TEST fio_dif_1_multi_subsystems 00:20:45.275 ************************************ 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 
bdev_null0 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 [2024-10-08 09:26:35.455890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 bdev_null1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 09:26:35 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:45.275 { 00:20:45.275 "params": { 00:20:45.275 "name": "Nvme$subsystem", 00:20:45.275 "trtype": "$TEST_TRANSPORT", 00:20:45.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.275 "adrfam": "ipv4", 00:20:45.275 "trsvcid": "$NVMF_PORT", 00:20:45.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.275 "hdgst": ${hdgst:-false}, 00:20:45.275 "ddgst": ${ddgst:-false} 00:20:45.275 }, 00:20:45.275 "method": "bdev_nvme_attach_controller" 00:20:45.275 } 00:20:45.275 EOF 00:20:45.275 )") 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:45.275 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:45.276 { 00:20:45.276 "params": { 00:20:45.276 "name": "Nvme$subsystem", 00:20:45.276 "trtype": "$TEST_TRANSPORT", 00:20:45.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.276 "adrfam": "ipv4", 00:20:45.276 "trsvcid": "$NVMF_PORT", 00:20:45.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.276 "hdgst": ${hdgst:-false}, 00:20:45.276 "ddgst": ${ddgst:-false} 00:20:45.276 }, 00:20:45.276 "method": "bdev_nvme_attach_controller" 00:20:45.276 } 00:20:45.276 EOF 00:20:45.276 )") 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:45.276 "params": { 00:20:45.276 "name": "Nvme0", 00:20:45.276 "trtype": "tcp", 00:20:45.276 "traddr": "10.0.0.3", 00:20:45.276 "adrfam": "ipv4", 00:20:45.276 "trsvcid": "4420", 00:20:45.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:45.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:45.276 "hdgst": false, 00:20:45.276 "ddgst": false 00:20:45.276 }, 00:20:45.276 "method": "bdev_nvme_attach_controller" 00:20:45.276 },{ 00:20:45.276 "params": { 00:20:45.276 "name": "Nvme1", 00:20:45.276 "trtype": "tcp", 00:20:45.276 "traddr": "10.0.0.3", 00:20:45.276 "adrfam": "ipv4", 00:20:45.276 "trsvcid": "4420", 00:20:45.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.276 "hdgst": false, 00:20:45.276 "ddgst": false 00:20:45.276 }, 00:20:45.276 "method": "bdev_nvme_attach_controller" 00:20:45.276 }' 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
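[editor's note] What fio_bdev ultimately runs is plain fio with SPDK's bdev ioengine preloaded and a JSON config on a spare fd that attaches the NVMe-oF controllers as bdevs. A stand-alone sketch of the same invocation for the single-subsystem case (the file name /tmp/bdev.json and the explicit job options are illustrative; the harness streams both the JSON and the job file over /dev/fd):

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ]
    }
    EOF
    # fio sees the attached namespace as bdev "Nvme0n1" and drives it through the plugin
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread=1 \
        --spdk_json_conf=/tmp/bdev.json --filename=Nvme0n1 \
        --rw=randread --bs=4k --iodepth=4 --runtime=10 --time_based=1

The two-subsystem run above uses the same mechanism with a second bdev_nvme_attach_controller entry (Nvme1 / cnode1) and a second fio job pointed at Nvme1n1.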
00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:45.276 09:26:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:45.276 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:45.276 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:45.276 fio-3.35 00:20:45.276 Starting 2 threads 00:20:55.254 00:20:55.254 filename0: (groupid=0, jobs=1): err= 0: pid=83566: Tue Oct 8 09:26:46 2024 00:20:55.254 read: IOPS=4882, BW=19.1MiB/s (20.0MB/s)(191MiB/10001msec) 00:20:55.254 slat (nsec): min=5830, max=96734, avg=20606.97, stdev=9664.47 00:20:55.254 clat (usec): min=585, max=2350, avg=764.22, stdev=64.71 00:20:55.254 lat (usec): min=592, max=2378, avg=784.83, stdev=67.35 00:20:55.254 clat percentiles (usec): 00:20:55.254 | 1.00th=[ 635], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 709], 00:20:55.254 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 775], 00:20:55.254 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 873], 00:20:55.254 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 1029], 99.95th=[ 1139], 00:20:55.254 | 99.99th=[ 1369] 00:20:55.254 bw ( KiB/s): min=18682, max=19904, per=49.99%, avg=19529.79, stdev=328.77, samples=19 00:20:55.254 iops : min= 4670, max= 4976, avg=4882.42, stdev=82.26, samples=19 00:20:55.254 lat (usec) : 750=43.57%, 1000=56.28% 00:20:55.254 lat (msec) : 2=0.14%, 4=0.01% 00:20:55.254 cpu : usr=93.89%, sys=4.84%, ctx=10, majf=0, minf=0 00:20:55.254 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.254 issued rwts: total=48832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.254 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:55.254 filename1: (groupid=0, jobs=1): err= 0: pid=83567: Tue Oct 8 09:26:46 2024 00:20:55.254 read: IOPS=4883, BW=19.1MiB/s (20.0MB/s)(191MiB/10001msec) 00:20:55.254 slat (nsec): min=5889, max=97055, avg=21274.77, stdev=10110.56 00:20:55.254 clat (usec): min=391, max=1694, avg=761.31, stdev=59.85 00:20:55.254 lat (usec): min=398, max=1731, avg=782.58, stdev=62.20 00:20:55.254 clat percentiles (usec): 00:20:55.254 | 1.00th=[ 652], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 709], 00:20:55.255 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 766], 00:20:55.255 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 865], 00:20:55.255 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 1012], 99.95th=[ 1123], 00:20:55.255 | 99.99th=[ 1369] 00:20:55.255 bw ( KiB/s): min=18720, max=19904, per=50.00%, avg=19531.79, stdev=324.45, samples=19 00:20:55.255 iops : min= 4680, max= 4976, avg=4882.95, stdev=81.11, samples=19 00:20:55.255 lat (usec) : 500=0.02%, 750=46.70%, 1000=53.16% 00:20:55.255 lat (msec) : 2=0.13% 00:20:55.255 cpu : usr=92.97%, sys=5.70%, ctx=17, majf=0, minf=0 00:20:55.255 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.255 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.255 issued rwts: total=48840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.255 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:55.255 00:20:55.255 Run status group 0 (all jobs): 00:20:55.255 READ: bw=38.1MiB/s (40.0MB/s), 19.1MiB/s-19.1MiB/s (20.0MB/s-20.0MB/s), io=382MiB (400MB), run=10001-10001msec 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 ************************************ 00:20:55.255 END TEST fio_dif_1_multi_subsystems 00:20:55.255 ************************************ 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 00:20:55.255 real 0m11.211s 00:20:55.255 user 0m19.509s 00:20:55.255 sys 0m1.377s 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 09:26:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:20:55.255 09:26:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:55.255 09:26:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 ************************************ 00:20:55.255 START TEST fio_dif_rand_params 00:20:55.255 ************************************ 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 bdev_null0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 [2024-10-08 09:26:46.734165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:55.255 09:26:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:55.255 { 00:20:55.255 "params": { 00:20:55.255 "name": "Nvme$subsystem", 00:20:55.255 "trtype": "$TEST_TRANSPORT", 00:20:55.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.255 "adrfam": "ipv4", 00:20:55.255 "trsvcid": "$NVMF_PORT", 00:20:55.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.255 "hdgst": ${hdgst:-false}, 00:20:55.255 "ddgst": ${ddgst:-false} 00:20:55.255 }, 00:20:55.255 "method": "bdev_nvme_attach_controller" 00:20:55.255 } 00:20:55.255 EOF 00:20:55.255 )") 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.255 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
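The xtrace above has just assembled, per subsystem, the bdev_nvme attach parameters that fio's spdk_bdev plugin will read from /dev/fd/62. Outside of the dif.sh helpers, the target-side setup the test performed a few lines earlier reduces to four RPCs; the following is a minimal standalone sketch, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (the trace itself issues the same calls through the rpc_cmd wrapper):

  # null backing bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 3
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.3:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The same pattern repeats later in this run with --dif-type 2 and three subsystems (cnode0..cnode2).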
00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:55.256 "params": { 00:20:55.256 "name": "Nvme0", 00:20:55.256 "trtype": "tcp", 00:20:55.256 "traddr": "10.0.0.3", 00:20:55.256 "adrfam": "ipv4", 00:20:55.256 "trsvcid": "4420", 00:20:55.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:55.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:55.256 "hdgst": false, 00:20:55.256 "ddgst": false 00:20:55.256 }, 00:20:55.256 "method": "bdev_nvme_attach_controller" 00:20:55.256 }' 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:55.256 09:26:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.515 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:55.515 ... 
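With the subsystem exported, the test drives I/O through fio's external spdk_bdev engine rather than a kernel initiator: the JSON printed above is passed to the plugin on /dev/fd/62 and the generated job file on /dev/fd/61. A rough standalone equivalent is sketched below, with the plugin path and fio binary taken from the trace and the job parameters from target/dif.sh (rw=randread, bs=128k, numjobs=3, iodepth=3, runtime=5); the file names bdev.json and dif.job, the time_based/thread settings, and the bdev name Nvme0n1 (what bdev_nvme_attach_controller is expected to create for controller Nvme0) are assumptions, not copied from the log:

  # bdev.json holds the '{ "params": { "name": "Nvme0", ... } ... }' config shown above,
  # wrapped as an SPDK JSON config; dif.job is the fio job file.
  cat > dif.job <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1
  [filename0]
  filename=Nvme0n1
  EOF
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.job

The "Starting 3 threads" banner and the ~5 s run times in the stats that follow line up with those parameters.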
00:20:55.515 fio-3.35 00:20:55.515 Starting 3 threads 00:21:02.101 00:21:02.101 filename0: (groupid=0, jobs=1): err= 0: pid=83723: Tue Oct 8 09:26:52 2024 00:21:02.101 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(184MiB/5005msec) 00:21:02.101 slat (usec): min=5, max=105, avg=14.50, stdev= 8.20 00:21:02.101 clat (usec): min=5838, max=12190, avg=10157.04, stdev=544.73 00:21:02.101 lat (usec): min=5845, max=12240, avg=10171.53, stdev=545.38 00:21:02.101 clat percentiles (usec): 00:21:02.101 | 1.00th=[ 9372], 5.00th=[ 9503], 10.00th=[ 9503], 20.00th=[ 9765], 00:21:02.101 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:21:02.101 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10683], 95.00th=[11076], 00:21:02.101 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12125], 99.95th=[12256], 00:21:02.101 | 99.99th=[12256] 00:21:02.101 bw ( KiB/s): min=36096, max=39168, per=33.08%, avg=37376.00, stdev=1015.97, samples=9 00:21:02.101 iops : min= 282, max= 306, avg=292.00, stdev= 7.94, samples=9 00:21:02.101 lat (msec) : 10=42.09%, 20=57.91% 00:21:02.101 cpu : usr=93.65%, sys=5.80%, ctx=10, majf=0, minf=0 00:21:02.101 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.101 issued rwts: total=1473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.101 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:02.101 filename0: (groupid=0, jobs=1): err= 0: pid=83724: Tue Oct 8 09:26:52 2024 00:21:02.101 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(184MiB/5006msec) 00:21:02.101 slat (nsec): min=6051, max=76346, avg=11865.00, stdev=7743.69 00:21:02.101 clat (usec): min=8948, max=13731, avg=10163.64, stdev=529.91 00:21:02.101 lat (usec): min=8955, max=13763, avg=10175.50, stdev=530.47 00:21:02.101 clat percentiles (usec): 00:21:02.101 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9503], 20.00th=[ 9765], 00:21:02.101 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:21:02.101 | 70.00th=[10421], 80.00th=[10421], 90.00th=[10683], 95.00th=[11076], 00:21:02.101 | 99.00th=[11863], 99.50th=[12125], 99.90th=[13698], 99.95th=[13698], 00:21:02.101 | 99.99th=[13698] 00:21:02.101 bw ( KiB/s): min=36096, max=39168, per=33.08%, avg=37376.00, stdev=1015.97, samples=9 00:21:02.101 iops : min= 282, max= 306, avg=292.00, stdev= 7.94, samples=9 00:21:02.101 lat (msec) : 10=41.82%, 20=58.18% 00:21:02.101 cpu : usr=92.09%, sys=7.35%, ctx=57, majf=0, minf=0 00:21:02.101 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.101 issued rwts: total=1473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.101 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:02.101 filename0: (groupid=0, jobs=1): err= 0: pid=83725: Tue Oct 8 09:26:52 2024 00:21:02.101 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(184MiB/5004msec) 00:21:02.101 slat (nsec): min=6322, max=55440, avg=12816.05, stdev=6269.60 00:21:02.101 clat (usec): min=7049, max=12451, avg=10161.66, stdev=517.15 00:21:02.101 lat (usec): min=7056, max=12463, avg=10174.48, stdev=517.35 00:21:02.101 clat percentiles (usec): 00:21:02.101 | 1.00th=[ 9372], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[ 9765], 00:21:02.101 | 30.00th=[ 9896], 40.00th=[10028], 
50.00th=[10159], 60.00th=[10290], 00:21:02.101 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:21:02.101 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12387], 99.95th=[12387], 00:21:02.101 | 99.99th=[12387] 00:21:02.101 bw ( KiB/s): min=36096, max=39168, per=33.08%, avg=37376.00, stdev=1015.97, samples=9 00:21:02.101 iops : min= 282, max= 306, avg=292.00, stdev= 7.94, samples=9 00:21:02.101 lat (msec) : 10=41.96%, 20=58.04% 00:21:02.101 cpu : usr=92.40%, sys=7.06%, ctx=7, majf=0, minf=0 00:21:02.101 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.101 issued rwts: total=1473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.101 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:02.101 00:21:02.101 Run status group 0 (all jobs): 00:21:02.101 READ: bw=110MiB/s (116MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=552MiB (579MB), run=5004-5006msec 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.101 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:02.102 09:26:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 bdev_null0 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 [2024-10-08 09:26:52.775699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 bdev_null1 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 bdev_null2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:02.102 { 00:21:02.102 "params": { 00:21:02.102 "name": "Nvme$subsystem", 00:21:02.102 "trtype": "$TEST_TRANSPORT", 00:21:02.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.102 "adrfam": "ipv4", 00:21:02.102 "trsvcid": "$NVMF_PORT", 00:21:02.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:02.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.102 "hdgst": ${hdgst:-false}, 00:21:02.102 "ddgst": ${ddgst:-false} 00:21:02.102 }, 00:21:02.102 "method": "bdev_nvme_attach_controller" 00:21:02.102 } 00:21:02.102 EOF 00:21:02.102 )") 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:02.102 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:02.103 { 00:21:02.103 "params": { 00:21:02.103 "name": "Nvme$subsystem", 00:21:02.103 "trtype": "$TEST_TRANSPORT", 00:21:02.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.103 "adrfam": "ipv4", 00:21:02.103 "trsvcid": "$NVMF_PORT", 00:21:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.103 "hdgst": ${hdgst:-false}, 00:21:02.103 "ddgst": ${ddgst:-false} 00:21:02.103 }, 00:21:02.103 "method": "bdev_nvme_attach_controller" 00:21:02.103 } 00:21:02.103 EOF 00:21:02.103 )") 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:02.103 09:26:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:02.103 { 00:21:02.103 "params": { 00:21:02.103 "name": "Nvme$subsystem", 00:21:02.103 "trtype": "$TEST_TRANSPORT", 00:21:02.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.103 "adrfam": "ipv4", 00:21:02.103 "trsvcid": "$NVMF_PORT", 00:21:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.103 "hdgst": ${hdgst:-false}, 00:21:02.103 "ddgst": ${ddgst:-false} 00:21:02.103 }, 00:21:02.103 "method": "bdev_nvme_attach_controller" 00:21:02.103 } 00:21:02.103 EOF 00:21:02.103 )") 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:02.103 "params": { 00:21:02.103 "name": "Nvme0", 00:21:02.103 "trtype": "tcp", 00:21:02.103 "traddr": "10.0.0.3", 00:21:02.103 "adrfam": "ipv4", 00:21:02.103 "trsvcid": "4420", 00:21:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:02.103 "hdgst": false, 00:21:02.103 "ddgst": false 00:21:02.103 }, 00:21:02.103 "method": "bdev_nvme_attach_controller" 00:21:02.103 },{ 00:21:02.103 "params": { 00:21:02.103 "name": "Nvme1", 00:21:02.103 "trtype": "tcp", 00:21:02.103 "traddr": "10.0.0.3", 00:21:02.103 "adrfam": "ipv4", 00:21:02.103 "trsvcid": "4420", 00:21:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.103 "hdgst": false, 00:21:02.103 "ddgst": false 00:21:02.103 }, 00:21:02.103 "method": "bdev_nvme_attach_controller" 00:21:02.103 },{ 00:21:02.103 "params": { 00:21:02.103 "name": "Nvme2", 00:21:02.103 "trtype": "tcp", 00:21:02.103 "traddr": "10.0.0.3", 00:21:02.103 "adrfam": "ipv4", 00:21:02.103 "trsvcid": "4420", 00:21:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:02.103 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:02.103 "hdgst": false, 00:21:02.103 "ddgst": false 00:21:02.103 }, 00:21:02.103 "method": "bdev_nvme_attach_controller" 00:21:02.103 }' 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:02.103 09:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:02.103 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:02.103 ... 00:21:02.103 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:02.103 ... 00:21:02.103 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:02.103 ... 00:21:02.103 fio-3.35 00:21:02.103 Starting 24 threads 00:21:14.316 00:21:14.316 filename0: (groupid=0, jobs=1): err= 0: pid=83820: Tue Oct 8 09:27:03 2024 00:21:14.316 read: IOPS=203, BW=814KiB/s (834kB/s)(8152KiB/10010msec) 00:21:14.316 slat (usec): min=3, max=11045, avg=53.34, stdev=520.99 00:21:14.316 clat (msec): min=11, max=134, avg=78.36, stdev=21.32 00:21:14.316 lat (msec): min=11, max=134, avg=78.41, stdev=21.33 00:21:14.316 clat percentiles (msec): 00:21:14.316 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 61], 00:21:14.316 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 83], 00:21:14.316 | 70.00th=[ 94], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 111], 00:21:14.316 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 134], 99.95th=[ 136], 00:21:14.316 | 99.99th=[ 136] 00:21:14.316 bw ( KiB/s): min= 696, max= 1104, per=4.20%, avg=811.60, stdev=117.76, samples=20 00:21:14.316 iops : min= 174, max= 276, avg=202.90, stdev=29.44, samples=20 00:21:14.316 lat (msec) : 20=0.29%, 50=9.57%, 100=69.19%, 250=20.95% 00:21:14.316 cpu : usr=31.61%, sys=1.35%, ctx=901, majf=0, minf=9 00:21:14.316 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.7%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:14.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.316 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.316 issued rwts: total=2038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.316 filename0: (groupid=0, jobs=1): err= 0: pid=83821: Tue Oct 8 09:27:03 2024 00:21:14.316 read: IOPS=214, BW=856KiB/s (877kB/s)(8632KiB/10079msec) 00:21:14.316 slat (usec): min=3, max=5053, avg=25.63, stdev=200.57 00:21:14.316 clat (usec): min=1361, max=141128, avg=74455.17, stdev=29849.23 00:21:14.317 lat (usec): min=1369, max=141137, avg=74480.80, stdev=29853.02 00:21:14.317 clat percentiles (usec): 00:21:14.317 | 1.00th=[ 1532], 5.00th=[ 2835], 10.00th=[ 39584], 20.00th=[ 55837], 00:21:14.317 | 30.00th=[ 63701], 40.00th=[ 68682], 50.00th=[ 72877], 60.00th=[ 83362], 00:21:14.317 | 70.00th=[ 94897], 80.00th=[103285], 90.00th=[108528], 95.00th=[112722], 00:21:14.317 | 99.00th=[126354], 99.50th=[130548], 99.90th=[137364], 99.95th=[139461], 00:21:14.317 | 99.99th=[141558] 00:21:14.317 bw ( KiB/s): min= 616, max= 2290, per=4.43%, avg=856.10, stdev=362.97, samples=20 00:21:14.317 iops : min= 154, max= 572, avg=214.00, stdev=90.64, samples=20 00:21:14.317 lat (msec) : 2=3.57%, 4=2.27%, 10=2.22%, 50=7.92%, 100=60.52% 00:21:14.317 lat (msec) : 250=23.49% 00:21:14.317 cpu : usr=45.11%, sys=1.80%, ctx=1352, majf=0, minf=9 00:21:14.317 IO depths : 1=0.3%, 2=1.3%, 4=4.0%, 8=78.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 complete : 0=0.0%, 4=88.7%, 8=10.4%, 16=0.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.317 filename0: (groupid=0, jobs=1): err= 0: pid=83822: Tue Oct 8 09:27:03 2024 00:21:14.317 read: IOPS=200, BW=801KiB/s (820kB/s)(8048KiB/10050msec) 00:21:14.317 slat (usec): min=3, max=5030, avg=24.27, stdev=174.64 00:21:14.317 clat (msec): min=31, max=140, avg=79.73, stdev=22.33 00:21:14.317 lat (msec): min=31, max=140, avg=79.76, stdev=22.33 00:21:14.317 clat percentiles (msec): 00:21:14.317 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 50], 20.00th=[ 59], 00:21:14.317 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 90], 00:21:14.317 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 113], 00:21:14.317 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 134], 99.95th=[ 138], 00:21:14.317 | 99.99th=[ 142] 00:21:14.317 bw ( KiB/s): min= 632, max= 1104, per=4.13%, avg=798.40, stdev=147.98, samples=20 00:21:14.317 iops : min= 158, max= 276, avg=199.60, stdev=37.00, samples=20 00:21:14.317 lat (msec) : 50=11.48%, 100=65.81%, 250=22.71% 00:21:14.317 cpu : usr=39.57%, sys=1.83%, ctx=1658, majf=0, minf=9 00:21:14.317 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.317 filename0: (groupid=0, jobs=1): err= 0: pid=83823: Tue Oct 8 09:27:03 2024 00:21:14.317 read: IOPS=217, BW=869KiB/s (890kB/s)(8692KiB/10004msec) 00:21:14.317 slat (usec): min=5, max=4051, avg=24.14, stdev=121.64 00:21:14.317 clat (msec): min=2, max=135, avg=73.54, stdev=24.47 00:21:14.317 lat (msec): min=2, max=135, avg=73.57, stdev=24.47 00:21:14.317 clat percentiles (msec): 00:21:14.317 | 1.00th=[ 6], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 53], 00:21:14.317 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 75], 00:21:14.317 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 111], 00:21:14.317 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 131], 00:21:14.317 | 99.99th=[ 136] 00:21:14.317 bw ( KiB/s): min= 712, max= 1128, per=4.40%, avg=849.00, stdev=147.12, samples=19 00:21:14.317 iops : min= 178, max= 282, avg=212.21, stdev=36.81, samples=19 00:21:14.317 lat (msec) : 4=0.28%, 10=2.02%, 20=0.32%, 50=15.83%, 100=63.14% 00:21:14.317 lat (msec) : 250=18.41% 00:21:14.317 cpu : usr=40.17%, sys=1.65%, ctx=1172, majf=0, minf=9 00:21:14.317 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.317 filename0: (groupid=0, jobs=1): err= 0: pid=83824: Tue Oct 8 09:27:03 2024 00:21:14.317 read: IOPS=207, BW=831KiB/s (851kB/s)(8312KiB/10002msec) 00:21:14.317 slat (usec): min=4, max=7283, avg=26.09, stdev=187.73 00:21:14.317 clat (msec): min=7, max=141, avg=76.91, stdev=22.50 00:21:14.317 lat (msec): min=7, max=141, avg=76.94, stdev=22.50 00:21:14.317 clat percentiles (msec): 00:21:14.317 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 59], 
00:21:14.317 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 80], 00:21:14.317 | 70.00th=[ 94], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 114], 00:21:14.317 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 133], 99.95th=[ 142], 00:21:14.317 | 99.99th=[ 142] 00:21:14.317 bw ( KiB/s): min= 640, max= 1080, per=4.28%, avg=826.42, stdev=145.31, samples=19 00:21:14.317 iops : min= 160, max= 270, avg=206.58, stdev=36.35, samples=19 00:21:14.317 lat (msec) : 10=0.14%, 20=0.34%, 50=12.13%, 100=68.00%, 250=19.39% 00:21:14.317 cpu : usr=40.49%, sys=1.75%, ctx=1313, majf=0, minf=9 00:21:14.317 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 issued rwts: total=2078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.317 filename0: (groupid=0, jobs=1): err= 0: pid=83825: Tue Oct 8 09:27:03 2024 00:21:14.317 read: IOPS=202, BW=809KiB/s (829kB/s)(8100KiB/10010msec) 00:21:14.317 slat (usec): min=4, max=8004, avg=23.97, stdev=196.63 00:21:14.317 clat (msec): min=16, max=136, avg=78.98, stdev=22.37 00:21:14.317 lat (msec): min=16, max=136, avg=79.01, stdev=22.37 00:21:14.317 clat percentiles (msec): 00:21:14.317 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 61], 00:21:14.317 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 84], 00:21:14.317 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 113], 00:21:14.317 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 136], 00:21:14.317 | 99.99th=[ 138] 00:21:14.317 bw ( KiB/s): min= 632, max= 1072, per=4.17%, avg=806.05, stdev=137.64, samples=20 00:21:14.317 iops : min= 158, max= 268, avg=201.50, stdev=34.43, samples=20 00:21:14.317 lat (msec) : 20=0.30%, 50=9.53%, 100=67.56%, 250=22.62% 00:21:14.317 cpu : usr=41.20%, sys=1.64%, ctx=1367, majf=0, minf=9 00:21:14.317 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:14.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 issued rwts: total=2025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.317 filename0: (groupid=0, jobs=1): err= 0: pid=83826: Tue Oct 8 09:27:03 2024 00:21:14.317 read: IOPS=198, BW=793KiB/s (812kB/s)(7972KiB/10055msec) 00:21:14.317 slat (usec): min=3, max=285, avg=16.59, stdev=11.15 00:21:14.317 clat (msec): min=7, max=142, avg=80.50, stdev=23.72 00:21:14.317 lat (msec): min=7, max=142, avg=80.52, stdev=23.72 00:21:14.317 clat percentiles (msec): 00:21:14.317 | 1.00th=[ 9], 5.00th=[ 42], 10.00th=[ 51], 20.00th=[ 61], 00:21:14.317 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 92], 00:21:14.317 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 115], 00:21:14.317 | 99.00th=[ 127], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 144], 00:21:14.317 | 99.99th=[ 144] 00:21:14.317 bw ( KiB/s): min= 616, max= 1152, per=4.11%, avg=793.20, stdev=158.65, samples=20 00:21:14.317 iops : min= 154, max= 288, avg=198.30, stdev=39.66, samples=20 00:21:14.317 lat (msec) : 10=1.61%, 50=8.38%, 100=68.04%, 250=21.98% 00:21:14.317 cpu : usr=35.58%, sys=1.37%, ctx=1020, majf=0, minf=9 00:21:14.317 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.4%, 16=16.8%, 32=0.0%, >=64=0.0% 00:21:14.317 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.317 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.317 filename0: (groupid=0, jobs=1): err= 0: pid=83827: Tue Oct 8 09:27:03 2024 00:21:14.317 read: IOPS=197, BW=792KiB/s (811kB/s)(7956KiB/10050msec) 00:21:14.317 slat (usec): min=6, max=8049, avg=25.17, stdev=254.68 00:21:14.317 clat (msec): min=35, max=143, avg=80.65, stdev=21.17 00:21:14.317 lat (msec): min=35, max=143, avg=80.67, stdev=21.17 00:21:14.317 clat percentiles (msec): 00:21:14.317 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 61], 00:21:14.317 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 86], 00:21:14.317 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 112], 00:21:14.317 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 142], 99.95th=[ 144], 00:21:14.317 | 99.99th=[ 144] 00:21:14.317 bw ( KiB/s): min= 584, max= 1032, per=4.09%, avg=789.20, stdev=138.50, samples=20 00:21:14.317 iops : min= 146, max= 258, avg=197.30, stdev=34.62, samples=20 00:21:14.317 lat (msec) : 50=8.14%, 100=69.13%, 250=22.72% 00:21:14.317 cpu : usr=34.46%, sys=1.21%, ctx=985, majf=0, minf=9 00:21:14.317 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:14.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.318 filename1: (groupid=0, jobs=1): err= 0: pid=83828: Tue Oct 8 09:27:03 2024 00:21:14.318 read: IOPS=200, BW=802KiB/s (821kB/s)(8060KiB/10053msec) 00:21:14.318 slat (usec): min=4, max=8020, avg=25.92, stdev=218.66 00:21:14.318 clat (msec): min=11, max=138, avg=79.54, stdev=21.73 00:21:14.318 lat (msec): min=11, max=138, avg=79.56, stdev=21.73 00:21:14.318 clat percentiles (msec): 00:21:14.318 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 62], 00:21:14.318 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 86], 00:21:14.318 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 113], 00:21:14.318 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 134], 99.95th=[ 140], 00:21:14.318 | 99.99th=[ 140] 00:21:14.318 bw ( KiB/s): min= 608, max= 1136, per=4.15%, avg=802.40, stdev=148.90, samples=20 00:21:14.318 iops : min= 152, max= 284, avg=200.60, stdev=37.23, samples=20 00:21:14.318 lat (msec) : 20=0.79%, 50=9.08%, 100=68.44%, 250=21.69% 00:21:14.318 cpu : usr=41.17%, sys=1.79%, ctx=1287, majf=0, minf=9 00:21:14.318 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:14.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 issued rwts: total=2015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.318 filename1: (groupid=0, jobs=1): err= 0: pid=83829: Tue Oct 8 09:27:03 2024 00:21:14.318 read: IOPS=192, BW=771KiB/s (789kB/s)(7744KiB/10047msec) 00:21:14.318 slat (usec): min=4, max=5003, avg=22.15, stdev=133.20 00:21:14.318 clat (msec): min=27, max=143, avg=82.85, stdev=22.93 00:21:14.318 lat (msec): min=27, max=143, avg=82.87, stdev=22.93 00:21:14.318 clat 
percentiles (msec): 00:21:14.318 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 61], 00:21:14.318 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 85], 60.00th=[ 94], 00:21:14.318 | 70.00th=[ 100], 80.00th=[ 107], 90.00th=[ 111], 95.00th=[ 118], 00:21:14.318 | 99.00th=[ 129], 99.50th=[ 134], 99.90th=[ 142], 99.95th=[ 144], 00:21:14.318 | 99.99th=[ 144] 00:21:14.318 bw ( KiB/s): min= 544, max= 1104, per=3.98%, avg=768.00, stdev=155.49, samples=20 00:21:14.318 iops : min= 136, max= 276, avg=192.00, stdev=38.87, samples=20 00:21:14.318 lat (msec) : 50=9.76%, 100=62.45%, 250=27.79% 00:21:14.318 cpu : usr=34.41%, sys=1.47%, ctx=1169, majf=0, minf=9 00:21:14.318 IO depths : 1=0.1%, 2=0.7%, 4=3.0%, 8=79.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:14.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.318 filename1: (groupid=0, jobs=1): err= 0: pid=83830: Tue Oct 8 09:27:03 2024 00:21:14.318 read: IOPS=210, BW=844KiB/s (864kB/s)(8440KiB/10003msec) 00:21:14.318 slat (usec): min=4, max=10031, avg=31.88, stdev=329.76 00:21:14.318 clat (msec): min=2, max=132, avg=75.71, stdev=24.05 00:21:14.318 lat (msec): min=3, max=132, avg=75.74, stdev=24.05 00:21:14.318 clat percentiles (msec): 00:21:14.318 | 1.00th=[ 7], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 60], 00:21:14.318 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:21:14.318 | 70.00th=[ 94], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 110], 00:21:14.318 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:21:14.318 | 99.99th=[ 132] 00:21:14.318 bw ( KiB/s): min= 640, max= 1065, per=4.24%, avg=819.95, stdev=132.08, samples=19 00:21:14.318 iops : min= 160, max= 266, avg=204.95, stdev=33.01, samples=19 00:21:14.318 lat (msec) : 4=0.76%, 10=1.80%, 20=0.33%, 50=10.33%, 100=66.78% 00:21:14.318 lat (msec) : 250=20.00% 00:21:14.318 cpu : usr=34.90%, sys=1.34%, ctx=1039, majf=0, minf=9 00:21:14.318 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:14.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.318 filename1: (groupid=0, jobs=1): err= 0: pid=83831: Tue Oct 8 09:27:03 2024 00:21:14.318 read: IOPS=206, BW=825KiB/s (844kB/s)(8264KiB/10023msec) 00:21:14.318 slat (usec): min=4, max=8051, avg=47.07, stdev=391.27 00:21:14.318 clat (msec): min=32, max=149, avg=77.33, stdev=21.59 00:21:14.318 lat (msec): min=32, max=149, avg=77.38, stdev=21.59 00:21:14.318 clat percentiles (msec): 00:21:14.318 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 60], 00:21:14.318 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 83], 00:21:14.318 | 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 112], 00:21:14.318 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 134], 99.95th=[ 150], 00:21:14.318 | 99.99th=[ 150] 00:21:14.318 bw ( KiB/s): min= 656, max= 1120, per=4.26%, avg=822.15, stdev=140.23, samples=20 00:21:14.318 iops : min= 164, max= 280, avg=205.50, stdev=35.04, samples=20 00:21:14.318 lat (msec) : 50=12.49%, 100=67.47%, 250=20.04% 00:21:14.318 cpu : usr=41.60%, sys=1.96%, ctx=1219, 
majf=0, minf=9 00:21:14.318 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:14.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 issued rwts: total=2066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.318 filename1: (groupid=0, jobs=1): err= 0: pid=83832: Tue Oct 8 09:27:03 2024 00:21:14.318 read: IOPS=204, BW=819KiB/s (838kB/s)(8196KiB/10010msec) 00:21:14.318 slat (usec): min=3, max=8011, avg=32.56, stdev=250.55 00:21:14.318 clat (msec): min=13, max=133, avg=78.03, stdev=22.30 00:21:14.318 lat (msec): min=13, max=133, avg=78.06, stdev=22.30 00:21:14.318 clat percentiles (msec): 00:21:14.318 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 59], 00:21:14.318 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 84], 00:21:14.318 | 70.00th=[ 93], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 112], 00:21:14.318 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 132], 00:21:14.318 | 99.99th=[ 134] 00:21:14.318 bw ( KiB/s): min= 656, max= 1048, per=4.22%, avg=814.45, stdev=133.38, samples=20 00:21:14.318 iops : min= 164, max= 262, avg=203.60, stdev=33.35, samples=20 00:21:14.318 lat (msec) : 20=0.29%, 50=12.30%, 100=66.47%, 250=20.94% 00:21:14.318 cpu : usr=38.92%, sys=1.31%, ctx=1109, majf=0, minf=9 00:21:14.318 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:14.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.318 filename1: (groupid=0, jobs=1): err= 0: pid=83833: Tue Oct 8 09:27:03 2024 00:21:14.318 read: IOPS=203, BW=814KiB/s (834kB/s)(8148KiB/10005msec) 00:21:14.318 slat (usec): min=3, max=8043, avg=40.17, stdev=387.81 00:21:14.318 clat (msec): min=7, max=126, avg=78.40, stdev=21.42 00:21:14.318 lat (msec): min=7, max=126, avg=78.44, stdev=21.42 00:21:14.318 clat percentiles (msec): 00:21:14.318 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 61], 00:21:14.318 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 84], 00:21:14.318 | 70.00th=[ 94], 80.00th=[ 101], 90.00th=[ 107], 95.00th=[ 114], 00:21:14.318 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:21:14.318 | 99.99th=[ 128] 00:21:14.318 bw ( KiB/s): min= 688, max= 1024, per=4.19%, avg=809.26, stdev=108.07, samples=19 00:21:14.318 iops : min= 172, max= 256, avg=202.32, stdev=27.02, samples=19 00:21:14.318 lat (msec) : 10=0.15%, 20=0.64%, 50=9.72%, 100=68.63%, 250=20.86% 00:21:14.318 cpu : usr=31.79%, sys=1.17%, ctx=896, majf=0, minf=9 00:21:14.318 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:14.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 complete : 0=0.0%, 4=88.2%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.318 issued rwts: total=2037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.318 filename1: (groupid=0, jobs=1): err= 0: pid=83834: Tue Oct 8 09:27:03 2024 00:21:14.319 read: IOPS=210, BW=843KiB/s (863kB/s)(8428KiB/10002msec) 00:21:14.319 slat (usec): min=3, max=8037, avg=36.45, stdev=297.62 00:21:14.319 clat 
(usec): min=1529, max=156960, avg=75779.51, stdev=24766.22 00:21:14.319 lat (usec): min=1537, max=156975, avg=75815.95, stdev=24767.19 00:21:14.319 clat percentiles (msec): 00:21:14.319 | 1.00th=[ 4], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 61], 00:21:14.319 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 82], 00:21:14.319 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 111], 00:21:14.319 | 99.00th=[ 118], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 157], 00:21:14.319 | 99.99th=[ 157] 00:21:14.319 bw ( KiB/s): min= 640, max= 1096, per=4.19%, avg=809.37, stdev=118.53, samples=19 00:21:14.319 iops : min= 160, max= 274, avg=202.32, stdev=29.64, samples=19 00:21:14.319 lat (msec) : 2=0.76%, 4=1.09%, 10=1.80%, 20=0.33%, 50=8.59% 00:21:14.319 lat (msec) : 100=67.73%, 250=19.70% 00:21:14.319 cpu : usr=43.04%, sys=1.65%, ctx=1269, majf=0, minf=9 00:21:14.319 IO depths : 1=0.1%, 2=1.5%, 4=5.7%, 8=77.8%, 16=14.9%, 32=0.0%, >=64=0.0% 00:21:14.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 complete : 0=0.0%, 4=88.3%, 8=10.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.319 filename1: (groupid=0, jobs=1): err= 0: pid=83835: Tue Oct 8 09:27:03 2024 00:21:14.319 read: IOPS=203, BW=814KiB/s (834kB/s)(8156KiB/10018msec) 00:21:14.319 slat (usec): min=3, max=4030, avg=33.89, stdev=215.20 00:21:14.319 clat (msec): min=27, max=137, avg=78.40, stdev=20.47 00:21:14.319 lat (msec): min=27, max=137, avg=78.44, stdev=20.47 00:21:14.319 clat percentiles (msec): 00:21:14.319 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 62], 00:21:14.319 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 82], 00:21:14.319 | 70.00th=[ 94], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 111], 00:21:14.319 | 99.00th=[ 118], 99.50th=[ 118], 99.90th=[ 126], 99.95th=[ 127], 00:21:14.319 | 99.99th=[ 138] 00:21:14.319 bw ( KiB/s): min= 664, max= 1104, per=4.20%, avg=811.60, stdev=111.41, samples=20 00:21:14.319 iops : min= 166, max= 276, avg=202.90, stdev=27.85, samples=20 00:21:14.319 lat (msec) : 50=8.93%, 100=70.57%, 250=20.50% 00:21:14.319 cpu : usr=42.48%, sys=1.62%, ctx=1447, majf=0, minf=9 00:21:14.319 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=78.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:21:14.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.319 filename2: (groupid=0, jobs=1): err= 0: pid=83836: Tue Oct 8 09:27:03 2024 00:21:14.319 read: IOPS=197, BW=790KiB/s (809kB/s)(7928KiB/10031msec) 00:21:14.319 slat (usec): min=4, max=8036, avg=32.37, stdev=311.72 00:21:14.319 clat (msec): min=34, max=143, avg=80.78, stdev=20.67 00:21:14.319 lat (msec): min=34, max=143, avg=80.82, stdev=20.66 00:21:14.319 clat percentiles (msec): 00:21:14.319 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 63], 00:21:14.319 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 87], 00:21:14.319 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 112], 00:21:14.319 | 99.00th=[ 128], 99.50th=[ 129], 99.90th=[ 132], 99.95th=[ 144], 00:21:14.319 | 99.99th=[ 144] 00:21:14.319 bw ( KiB/s): min= 640, max= 1064, per=4.07%, avg=786.40, stdev=121.25, samples=20 00:21:14.319 iops : min= 160, 
max= 266, avg=196.60, stdev=30.31, samples=20 00:21:14.319 lat (msec) : 50=8.12%, 100=70.18%, 250=21.70% 00:21:14.319 cpu : usr=38.96%, sys=1.48%, ctx=1203, majf=0, minf=9 00:21:14.319 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:14.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 complete : 0=0.0%, 4=88.6%, 8=10.3%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 issued rwts: total=1982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.319 filename2: (groupid=0, jobs=1): err= 0: pid=83837: Tue Oct 8 09:27:03 2024 00:21:14.319 read: IOPS=194, BW=778KiB/s (797kB/s)(7820KiB/10050msec) 00:21:14.319 slat (usec): min=6, max=8029, avg=40.66, stdev=406.34 00:21:14.319 clat (msec): min=35, max=140, avg=81.98, stdev=19.92 00:21:14.319 lat (msec): min=35, max=140, avg=82.02, stdev=19.91 00:21:14.319 clat percentiles (msec): 00:21:14.319 | 1.00th=[ 43], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 65], 00:21:14.319 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 87], 00:21:14.319 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 113], 00:21:14.319 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 134], 99.95th=[ 140], 00:21:14.319 | 99.99th=[ 140] 00:21:14.319 bw ( KiB/s): min= 632, max= 1008, per=4.01%, avg=775.60, stdev=117.07, samples=20 00:21:14.319 iops : min= 158, max= 252, avg=193.90, stdev=29.27, samples=20 00:21:14.319 lat (msec) : 50=5.37%, 100=71.87%, 250=22.76% 00:21:14.319 cpu : usr=34.45%, sys=1.14%, ctx=960, majf=0, minf=9 00:21:14.319 IO depths : 1=0.1%, 2=1.4%, 4=5.8%, 8=76.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:14.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.319 filename2: (groupid=0, jobs=1): err= 0: pid=83838: Tue Oct 8 09:27:03 2024 00:21:14.319 read: IOPS=186, BW=747KiB/s (765kB/s)(7492KiB/10025msec) 00:21:14.319 slat (usec): min=4, max=4043, avg=28.35, stdev=185.82 00:21:14.319 clat (msec): min=27, max=143, avg=85.46, stdev=18.55 00:21:14.319 lat (msec): min=27, max=143, avg=85.48, stdev=18.55 00:21:14.319 clat percentiles (msec): 00:21:14.319 | 1.00th=[ 46], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 70], 00:21:14.319 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 91], 00:21:14.319 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 110], 95.00th=[ 114], 00:21:14.319 | 99.00th=[ 129], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 144], 00:21:14.319 | 99.99th=[ 144] 00:21:14.319 bw ( KiB/s): min= 640, max= 897, per=3.84%, avg=742.85, stdev=64.35, samples=20 00:21:14.319 iops : min= 160, max= 224, avg=185.70, stdev=16.06, samples=20 00:21:14.319 lat (msec) : 50=2.03%, 100=71.22%, 250=26.75% 00:21:14.319 cpu : usr=39.66%, sys=1.63%, ctx=1100, majf=0, minf=9 00:21:14.319 IO depths : 1=0.1%, 2=2.7%, 4=10.6%, 8=72.0%, 16=14.7%, 32=0.0%, >=64=0.0% 00:21:14.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 complete : 0=0.0%, 4=90.2%, 8=7.5%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 issued rwts: total=1873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.319 filename2: (groupid=0, jobs=1): err= 0: pid=83839: Tue Oct 8 09:27:03 2024 00:21:14.319 read: IOPS=201, 
BW=806KiB/s (825kB/s)(8116KiB/10072msec) 00:21:14.319 slat (nsec): min=3434, max=96697, avg=18225.90, stdev=10800.76 00:21:14.319 clat (msec): min=6, max=146, avg=79.17, stdev=24.36 00:21:14.319 lat (msec): min=6, max=146, avg=79.19, stdev=24.36 00:21:14.319 clat percentiles (msec): 00:21:14.319 | 1.00th=[ 7], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 61], 00:21:14.319 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 88], 00:21:14.319 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 115], 00:21:14.319 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 134], 99.95th=[ 140], 00:21:14.319 | 99.99th=[ 146] 00:21:14.319 bw ( KiB/s): min= 616, max= 1269, per=4.16%, avg=804.65, stdev=168.52, samples=20 00:21:14.319 iops : min= 154, max= 317, avg=201.15, stdev=42.09, samples=20 00:21:14.319 lat (msec) : 10=2.27%, 50=9.56%, 100=64.17%, 250=24.00% 00:21:14.319 cpu : usr=31.96%, sys=1.34%, ctx=906, majf=0, minf=9 00:21:14.319 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:14.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.319 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.319 filename2: (groupid=0, jobs=1): err= 0: pid=83840: Tue Oct 8 09:27:03 2024 00:21:14.319 read: IOPS=198, BW=792KiB/s (812kB/s)(7944KiB/10024msec) 00:21:14.319 slat (usec): min=3, max=9036, avg=48.53, stdev=391.46 00:21:14.319 clat (msec): min=35, max=159, avg=80.44, stdev=20.60 00:21:14.319 lat (msec): min=35, max=159, avg=80.49, stdev=20.59 00:21:14.319 clat percentiles (msec): 00:21:14.319 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 64], 00:21:14.319 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:21:14.319 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 113], 00:21:14.319 | 99.00th=[ 128], 99.50th=[ 128], 99.90th=[ 161], 99.95th=[ 161], 00:21:14.319 | 99.99th=[ 161] 00:21:14.319 bw ( KiB/s): min= 656, max= 1072, per=4.09%, avg=790.10, stdev=104.92, samples=20 00:21:14.319 iops : min= 164, max= 268, avg=197.50, stdev=26.20, samples=20 00:21:14.319 lat (msec) : 50=8.16%, 100=70.44%, 250=21.40% 00:21:14.319 cpu : usr=40.32%, sys=1.52%, ctx=1237, majf=0, minf=9 00:21:14.319 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=77.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.320 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.320 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.320 filename2: (groupid=0, jobs=1): err= 0: pid=83841: Tue Oct 8 09:27:03 2024 00:21:14.320 read: IOPS=201, BW=806KiB/s (825kB/s)(8108KiB/10058msec) 00:21:14.320 slat (usec): min=4, max=8022, avg=23.07, stdev=199.41 00:21:14.320 clat (msec): min=7, max=144, avg=79.14, stdev=24.61 00:21:14.320 lat (msec): min=7, max=144, avg=79.16, stdev=24.61 00:21:14.320 clat percentiles (msec): 00:21:14.320 | 1.00th=[ 8], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 58], 00:21:14.320 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 80], 60.00th=[ 90], 00:21:14.320 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 115], 00:21:14.320 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 142], 99.95th=[ 142], 00:21:14.320 | 99.99th=[ 144] 00:21:14.320 bw ( KiB/s): min= 616, max= 1288, per=4.17%, avg=806.80, 
stdev=187.53, samples=20 00:21:14.320 iops : min= 154, max= 322, avg=201.70, stdev=46.88, samples=20 00:21:14.320 lat (msec) : 10=1.48%, 20=0.05%, 50=12.68%, 100=62.51%, 250=23.29% 00:21:14.320 cpu : usr=36.11%, sys=1.46%, ctx=1209, majf=0, minf=9 00:21:14.320 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.1%, 16=16.9%, 32=0.0%, >=64=0.0% 00:21:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.320 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.320 issued rwts: total=2027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.320 filename2: (groupid=0, jobs=1): err= 0: pid=83842: Tue Oct 8 09:27:03 2024 00:21:14.320 read: IOPS=199, BW=799KiB/s (818kB/s)(8004KiB/10020msec) 00:21:14.320 slat (usec): min=4, max=8052, avg=28.23, stdev=268.48 00:21:14.320 clat (msec): min=27, max=143, avg=79.96, stdev=21.03 00:21:14.320 lat (msec): min=27, max=143, avg=79.99, stdev=21.04 00:21:14.320 clat percentiles (msec): 00:21:14.320 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 62], 00:21:14.320 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 85], 00:21:14.320 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 112], 00:21:14.320 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 134], 99.95th=[ 144], 00:21:14.320 | 99.99th=[ 144] 00:21:14.320 bw ( KiB/s): min= 656, max= 1072, per=4.12%, avg=796.40, stdev=126.74, samples=20 00:21:14.320 iops : min= 164, max= 268, avg=199.10, stdev=31.69, samples=20 00:21:14.320 lat (msec) : 50=10.04%, 100=68.72%, 250=21.24% 00:21:14.320 cpu : usr=35.09%, sys=1.33%, ctx=1033, majf=0, minf=9 00:21:14.320 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.320 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.320 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.320 filename2: (groupid=0, jobs=1): err= 0: pid=83843: Tue Oct 8 09:27:03 2024 00:21:14.320 read: IOPS=197, BW=788KiB/s (807kB/s)(7920KiB/10045msec) 00:21:14.320 slat (usec): min=3, max=8024, avg=28.25, stdev=220.49 00:21:14.320 clat (msec): min=35, max=140, avg=80.96, stdev=20.22 00:21:14.320 lat (msec): min=35, max=140, avg=80.99, stdev=20.22 00:21:14.320 clat percentiles (msec): 00:21:14.320 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:21:14.320 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 85], 00:21:14.320 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 114], 00:21:14.320 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 134], 99.95th=[ 140], 00:21:14.320 | 99.99th=[ 140] 00:21:14.320 bw ( KiB/s): min= 656, max= 1104, per=4.07%, avg=785.60, stdev=118.73, samples=20 00:21:14.320 iops : min= 164, max= 276, avg=196.40, stdev=29.68, samples=20 00:21:14.320 lat (msec) : 50=6.97%, 100=72.07%, 250=20.96% 00:21:14.320 cpu : usr=35.16%, sys=1.27%, ctx=1081, majf=0, minf=9 00:21:14.320 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:14.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.320 complete : 0=0.0%, 4=88.5%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.320 issued rwts: total=1980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:14.320 00:21:14.320 Run status 
group 0 (all jobs): 00:21:14.320 READ: bw=18.9MiB/s (19.8MB/s), 747KiB/s-869KiB/s (765kB/s-890kB/s), io=190MiB (199MB), run=10002-10079msec 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.320 09:27:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 bdev_null0 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.321 [2024-10-08 09:27:04.231704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.321 bdev_null1 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.321 { 00:21:14.321 "params": { 00:21:14.321 "name": "Nvme$subsystem", 00:21:14.321 "trtype": "$TEST_TRANSPORT", 00:21:14.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.321 "adrfam": "ipv4", 00:21:14.321 "trsvcid": "$NVMF_PORT", 00:21:14.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.321 "hdgst": ${hdgst:-false}, 00:21:14.321 "ddgst": ${ddgst:-false} 00:21:14.321 }, 00:21:14.321 "method": "bdev_nvme_attach_controller" 00:21:14.321 } 00:21:14.321 EOF 00:21:14.321 )") 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.321 { 00:21:14.321 "params": { 00:21:14.321 "name": "Nvme$subsystem", 00:21:14.321 "trtype": "$TEST_TRANSPORT", 00:21:14.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.321 "adrfam": "ipv4", 00:21:14.321 "trsvcid": "$NVMF_PORT", 00:21:14.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.321 "hdgst": ${hdgst:-false}, 00:21:14.321 "ddgst": ${ddgst:-false} 00:21:14.321 }, 00:21:14.321 "method": "bdev_nvme_attach_controller" 00:21:14.321 } 00:21:14.321 EOF 00:21:14.321 )") 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
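For reference, the rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py client. A minimal standalone sketch of the same setup (the rpc.py invocation and its defaults are assumptions, not taken from this run; the arguments mirror the trace) would be roughly:

  # create a null bdev (64 MB, 512-byte blocks) with 16 bytes of metadata and DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # expose it through an NVMe-oF subsystem listening on NVMe/TCP 10.0.0.3:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The second subsystem (bdev_null1 behind cnode1) is created the same way, as the trace above shows.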
00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:14.321 "params": { 00:21:14.321 "name": "Nvme0", 00:21:14.321 "trtype": "tcp", 00:21:14.321 "traddr": "10.0.0.3", 00:21:14.321 "adrfam": "ipv4", 00:21:14.321 "trsvcid": "4420", 00:21:14.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.321 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:14.321 "hdgst": false, 00:21:14.321 "ddgst": false 00:21:14.321 }, 00:21:14.321 "method": "bdev_nvme_attach_controller" 00:21:14.321 },{ 00:21:14.321 "params": { 00:21:14.321 "name": "Nvme1", 00:21:14.321 "trtype": "tcp", 00:21:14.321 "traddr": "10.0.0.3", 00:21:14.321 "adrfam": "ipv4", 00:21:14.321 "trsvcid": "4420", 00:21:14.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.321 "hdgst": false, 00:21:14.321 "ddgst": false 00:21:14.321 }, 00:21:14.321 "method": "bdev_nvme_attach_controller" 00:21:14.321 }' 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:14.321 09:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:14.321 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:14.321 ... 00:21:14.321 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:14.321 ... 
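The JSON block assembled above is handed to fio on /dev/fd/62; it carries one bdev_nvme_attach_controller entry per subsystem, so the spdk_bdev ioengine attaches to both NVMe/TCP targets before I/O starts. The job file itself goes in on /dev/fd/61 and is not echoed in the log; a rough reconstruction matching the parameters visible in the trace (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, two files; the section layout and the Nvme0n1/Nvme1n1 bdev names are assumptions) would look something like:

  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

launched, as in the trace, with the external engine preloaded:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61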
00:21:14.321 fio-3.35 00:21:14.321 Starting 4 threads 00:21:18.527 00:21:18.527 filename0: (groupid=0, jobs=1): err= 0: pid=83983: Tue Oct 8 09:27:10 2024 00:21:18.527 read: IOPS=1861, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5002msec) 00:21:18.527 slat (usec): min=4, max=135, avg=17.10, stdev=10.25 00:21:18.527 clat (usec): min=1170, max=7023, avg=4227.86, stdev=582.96 00:21:18.527 lat (usec): min=1178, max=7036, avg=4244.96, stdev=581.64 00:21:18.527 clat percentiles (usec): 00:21:18.527 | 1.00th=[ 1876], 5.00th=[ 3064], 10.00th=[ 3589], 20.00th=[ 4080], 00:21:18.527 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 00:21:18.527 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4817], 00:21:18.527 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 6521], 99.95th=[ 6980], 00:21:18.527 | 99.99th=[ 7046] 00:21:18.527 bw ( KiB/s): min=14080, max=17808, per=20.24%, avg=14972.44, stdev=1424.40, samples=9 00:21:18.527 iops : min= 1760, max= 2226, avg=1871.56, stdev=178.05, samples=9 00:21:18.527 lat (msec) : 2=1.32%, 4=15.36%, 10=83.32% 00:21:18.527 cpu : usr=94.56%, sys=4.58%, ctx=10, majf=0, minf=0 00:21:18.527 IO depths : 1=0.4%, 2=22.7%, 4=51.4%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:18.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.527 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.527 issued rwts: total=9310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.527 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:18.527 filename0: (groupid=0, jobs=1): err= 0: pid=83984: Tue Oct 8 09:27:10 2024 00:21:18.527 read: IOPS=2449, BW=19.1MiB/s (20.1MB/s)(95.7MiB/5002msec) 00:21:18.527 slat (usec): min=3, max=100, avg=21.71, stdev=10.97 00:21:18.527 clat (usec): min=1207, max=7004, avg=3213.94, stdev=905.40 00:21:18.527 lat (usec): min=1221, max=7019, avg=3235.65, stdev=905.40 00:21:18.527 clat percentiles (usec): 00:21:18.527 | 1.00th=[ 1844], 5.00th=[ 2089], 10.00th=[ 2147], 20.00th=[ 2278], 00:21:18.527 | 30.00th=[ 2409], 40.00th=[ 2573], 50.00th=[ 3032], 60.00th=[ 3916], 00:21:18.527 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4424], 00:21:18.527 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5080], 99.95th=[ 5866], 00:21:18.527 | 99.99th=[ 6849] 00:21:18.527 bw ( KiB/s): min=19072, max=19920, per=26.52%, avg=19619.56, stdev=263.85, samples=9 00:21:18.527 iops : min= 2384, max= 2490, avg=2452.44, stdev=32.98, samples=9 00:21:18.527 lat (msec) : 2=2.17%, 4=61.75%, 10=36.08% 00:21:18.527 cpu : usr=93.82%, sys=5.20%, ctx=7, majf=0, minf=1 00:21:18.527 IO depths : 1=0.2%, 2=1.6%, 4=62.9%, 8=35.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:18.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.527 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.527 issued rwts: total=12251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.527 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:18.527 filename1: (groupid=0, jobs=1): err= 0: pid=83985: Tue Oct 8 09:27:10 2024 00:21:18.527 read: IOPS=2491, BW=19.5MiB/s (20.4MB/s)(97.4MiB/5003msec) 00:21:18.527 slat (nsec): min=5849, max=91862, avg=18953.86, stdev=9767.57 00:21:18.527 clat (usec): min=1055, max=6928, avg=3163.76, stdev=927.60 00:21:18.527 lat (usec): min=1073, max=6942, avg=3182.71, stdev=926.08 00:21:18.527 clat percentiles (usec): 00:21:18.527 | 1.00th=[ 1713], 5.00th=[ 1860], 10.00th=[ 1926], 20.00th=[ 2212], 00:21:18.527 | 30.00th=[ 2474], 40.00th=[ 2704], 
50.00th=[ 2966], 60.00th=[ 3720], 00:21:18.527 | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4424], 00:21:18.527 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5145], 99.95th=[ 5866], 00:21:18.527 | 99.99th=[ 6783] 00:21:18.527 bw ( KiB/s): min=19528, max=21760, per=27.05%, avg=20011.56, stdev=694.10, samples=9 00:21:18.528 iops : min= 2441, max= 2720, avg=2501.44, stdev=86.76, samples=9 00:21:18.528 lat (msec) : 2=13.47%, 4=55.84%, 10=30.70% 00:21:18.528 cpu : usr=94.50%, sys=4.48%, ctx=5, majf=0, minf=0 00:21:18.528 IO depths : 1=0.2%, 2=0.6%, 4=63.4%, 8=35.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:18.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.528 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.528 issued rwts: total=12467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.528 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:18.528 filename1: (groupid=0, jobs=1): err= 0: pid=83986: Tue Oct 8 09:27:10 2024 00:21:18.528 read: IOPS=2446, BW=19.1MiB/s (20.0MB/s)(95.6MiB/5001msec) 00:21:18.528 slat (usec): min=3, max=103, avg=21.61, stdev=11.19 00:21:18.528 clat (usec): min=1106, max=6846, avg=3217.78, stdev=905.67 00:21:18.528 lat (usec): min=1113, max=6859, avg=3239.39, stdev=905.26 00:21:18.528 clat percentiles (usec): 00:21:18.528 | 1.00th=[ 1811], 5.00th=[ 2057], 10.00th=[ 2114], 20.00th=[ 2278], 00:21:18.528 | 30.00th=[ 2442], 40.00th=[ 2606], 50.00th=[ 3097], 60.00th=[ 3884], 00:21:18.528 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4424], 00:21:18.528 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5997], 99.95th=[ 5997], 00:21:18.528 | 99.99th=[ 6783] 00:21:18.528 bw ( KiB/s): min=18789, max=19920, per=26.47%, avg=19584.56, stdev=343.57, samples=9 00:21:18.528 iops : min= 2348, max= 2490, avg=2448.00, stdev=43.13, samples=9 00:21:18.528 lat (msec) : 2=2.41%, 4=62.52%, 10=35.07% 00:21:18.528 cpu : usr=93.84%, sys=5.14%, ctx=33, majf=0, minf=0 00:21:18.528 IO depths : 1=0.2%, 2=1.7%, 4=62.8%, 8=35.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:18.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.528 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.528 issued rwts: total=12233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.528 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:18.528 00:21:18.528 Run status group 0 (all jobs): 00:21:18.528 READ: bw=72.2MiB/s (75.7MB/s), 14.5MiB/s-19.5MiB/s (15.2MB/s-20.4MB/s), io=361MiB (379MB), run=5001-5003msec 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:18.787 ************************************ 00:21:18.787 END TEST fio_dif_rand_params 00:21:18.787 ************************************ 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.787 00:21:18.787 real 0m23.714s 00:21:18.787 user 2m6.481s 00:21:18.787 sys 0m6.536s 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:18.787 09:27:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:18.787 09:27:10 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:18.787 09:27:10 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:18.787 09:27:10 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:18.787 09:27:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:18.787 ************************************ 00:21:18.787 START TEST fio_dif_digest 00:21:18.787 ************************************ 00:21:18.787 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:21:18.787 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:18.787 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:18.787 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:18.787 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:18.787 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:18.787 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:18.787 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:18.787 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:18.788 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:18.788 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:18.788 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:18.788 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:18.788 09:27:10 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:18.788 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:18.788 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:18.788 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:18.788 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.788 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:19.047 bdev_null0 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:19.047 [2024-10-08 09:27:10.498697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:19.047 { 00:21:19.047 "params": { 00:21:19.047 "name": "Nvme$subsystem", 00:21:19.047 "trtype": "$TEST_TRANSPORT", 00:21:19.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.047 "adrfam": "ipv4", 00:21:19.047 "trsvcid": "$NVMF_PORT", 00:21:19.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.047 "hdgst": ${hdgst:-false}, 00:21:19.047 "ddgst": ${ddgst:-false} 00:21:19.047 }, 00:21:19.047 "method": "bdev_nvme_attach_controller" 00:21:19.047 } 00:21:19.047 EOF 00:21:19.047 )") 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- 
target/dif.sh@82 -- # gen_fio_conf 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:19.047 "params": { 00:21:19.047 "name": "Nvme0", 00:21:19.047 "trtype": "tcp", 00:21:19.047 "traddr": "10.0.0.3", 00:21:19.047 "adrfam": "ipv4", 00:21:19.047 "trsvcid": "4420", 00:21:19.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:19.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:19.047 "hdgst": true, 00:21:19.047 "ddgst": true 00:21:19.047 }, 00:21:19.047 "method": "bdev_nvme_attach_controller" 00:21:19.047 }' 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:19.047 09:27:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:19.047 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:19.047 ... 00:21:19.047 fio-3.35 00:21:19.047 Starting 3 threads 00:21:31.287 00:21:31.287 filename0: (groupid=0, jobs=1): err= 0: pid=84096: Tue Oct 8 09:27:21 2024 00:21:31.287 read: IOPS=274, BW=34.4MiB/s (36.0MB/s)(344MiB/10007msec) 00:21:31.287 slat (usec): min=5, max=110, avg=14.22, stdev= 8.25 00:21:31.287 clat (usec): min=7383, max=12425, avg=10877.50, stdev=341.45 00:21:31.287 lat (usec): min=7390, max=12445, avg=10891.72, stdev=341.93 00:21:31.287 clat percentiles (usec): 00:21:31.287 | 1.00th=[10552], 5.00th=[10683], 10.00th=[10683], 20.00th=[10683], 00:21:31.287 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10814], 00:21:31.287 | 70.00th=[10945], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:21:31.287 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12387], 99.95th=[12387], 00:21:31.287 | 99.99th=[12387] 00:21:31.287 bw ( KiB/s): min=34560, max=36096, per=33.34%, avg=35195.63, stdev=384.47, samples=19 00:21:31.287 iops : min= 270, max= 282, avg=274.95, stdev= 3.01, samples=19 00:21:31.287 lat (msec) : 10=0.22%, 20=99.78% 00:21:31.287 cpu : usr=94.07%, sys=5.36%, ctx=15, majf=0, minf=0 00:21:31.287 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:31.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.287 issued rwts: total=2751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.287 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:31.287 filename0: (groupid=0, jobs=1): err= 0: pid=84097: Tue Oct 8 09:27:21 2024 00:21:31.287 read: IOPS=274, BW=34.4MiB/s (36.0MB/s)(344MiB/10007msec) 00:21:31.287 slat (nsec): min=6411, max=93125, avg=12859.53, stdev=7447.59 00:21:31.287 clat (usec): min=7143, max=13521, avg=10881.44, stdev=362.78 00:21:31.287 lat (usec): min=7150, max=13553, avg=10894.30, stdev=363.31 00:21:31.287 clat percentiles (usec): 00:21:31.287 | 1.00th=[10552], 5.00th=[10683], 10.00th=[10683], 20.00th=[10683], 00:21:31.287 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10814], 00:21:31.287 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:21:31.287 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13435], 99.95th=[13566], 00:21:31.287 | 99.99th=[13566] 00:21:31.287 bw ( KiB/s): min=34560, max=36096, per=33.34%, avg=35195.63, stdev=461.90, samples=19 00:21:31.287 iops : min= 270, max= 282, avg=274.95, stdev= 3.61, samples=19 00:21:31.287 lat (msec) : 10=0.22%, 20=99.78% 00:21:31.287 cpu : usr=94.48%, sys=4.97%, ctx=15, majf=0, minf=0 00:21:31.287 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:31.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.287 issued rwts: total=2751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.287 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:31.288 filename0: (groupid=0, jobs=1): err= 0: pid=84098: Tue Oct 8 09:27:21 2024 00:21:31.288 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(344MiB/10002msec) 00:21:31.288 slat (nsec): min=6185, max=88312, avg=12924.01, stdev=7603.90 00:21:31.288 clat (usec): min=5140, max=12464, avg=10873.45, stdev=365.51 00:21:31.288 lat 
(usec): min=5149, max=12486, avg=10886.38, stdev=365.85 00:21:31.288 clat percentiles (usec): 00:21:31.288 | 1.00th=[10552], 5.00th=[10683], 10.00th=[10683], 20.00th=[10683], 00:21:31.288 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10814], 00:21:31.288 | 70.00th=[10945], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:21:31.288 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12387], 99.95th=[12518], 00:21:31.288 | 99.99th=[12518] 00:21:31.288 bw ( KiB/s): min=34491, max=36096, per=33.39%, avg=35243.53, stdev=360.13, samples=19 00:21:31.288 iops : min= 269, max= 282, avg=275.32, stdev= 2.87, samples=19 00:21:31.288 lat (msec) : 10=0.11%, 20=99.89% 00:21:31.288 cpu : usr=95.18%, sys=4.26%, ctx=14, majf=0, minf=9 00:21:31.288 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:31.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.288 issued rwts: total=2751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.288 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:31.288 00:21:31.288 Run status group 0 (all jobs): 00:21:31.288 READ: bw=103MiB/s (108MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.1MB/s), io=1032MiB (1082MB), run=10002-10007msec 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:31.288 ************************************ 00:21:31.288 END TEST fio_dif_digest 00:21:31.288 ************************************ 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.288 00:21:31.288 real 0m11.011s 00:21:31.288 user 0m29.025s 00:21:31.288 sys 0m1.767s 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:31.288 09:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:31.288 09:27:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:31.288 09:27:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@126 -- # 
modprobe -v -r nvme-tcp 00:21:31.288 rmmod nvme_tcp 00:21:31.288 rmmod nvme_fabrics 00:21:31.288 rmmod nvme_keyring 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 83334 ']' 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 83334 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 83334 ']' 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 83334 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83334 00:21:31.288 killing process with pid 83334 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83334' 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@969 -- # kill 83334 00:21:31.288 09:27:21 nvmf_dif -- common/autotest_common.sh@974 -- # wait 83334 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:21:31.288 09:27:21 nvmf_dif -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:31.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:31.288 Waiting for block devices as requested 00:21:31.288 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:31.288 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:31.288 09:27:22 nvmf_dif -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.288 09:27:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:31.288 09:27:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.288 09:27:22 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:31.288 ************************************ 00:21:31.288 END TEST nvmf_dif 00:21:31.288 ************************************ 00:21:31.288 00:21:31.288 real 1m0.712s 00:21:31.288 user 3m50.397s 00:21:31.288 sys 0m18.252s 00:21:31.288 09:27:22 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:31.288 09:27:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:31.288 09:27:22 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:31.288 09:27:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:31.288 09:27:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:31.288 09:27:22 -- common/autotest_common.sh@10 -- # set +x 00:21:31.288 ************************************ 00:21:31.288 START TEST nvmf_abort_qd_sizes 00:21:31.288 ************************************ 00:21:31.288 09:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:31.288 * Looking for test storage... 00:21:31.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:31.288 09:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:31.288 09:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:21:31.288 09:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:31.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.548 --rc genhtml_branch_coverage=1 00:21:31.548 --rc genhtml_function_coverage=1 00:21:31.548 --rc genhtml_legend=1 00:21:31.548 --rc geninfo_all_blocks=1 00:21:31.548 --rc geninfo_unexecuted_blocks=1 00:21:31.548 00:21:31.548 ' 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:31.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.548 --rc genhtml_branch_coverage=1 00:21:31.548 --rc genhtml_function_coverage=1 00:21:31.548 --rc genhtml_legend=1 00:21:31.548 --rc geninfo_all_blocks=1 00:21:31.548 --rc geninfo_unexecuted_blocks=1 00:21:31.548 00:21:31.548 ' 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:31.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.548 --rc genhtml_branch_coverage=1 00:21:31.548 --rc genhtml_function_coverage=1 00:21:31.548 --rc genhtml_legend=1 00:21:31.548 --rc geninfo_all_blocks=1 00:21:31.548 --rc geninfo_unexecuted_blocks=1 00:21:31.548 00:21:31.548 ' 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:31.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.548 --rc genhtml_branch_coverage=1 00:21:31.548 --rc genhtml_function_coverage=1 00:21:31.548 --rc genhtml_legend=1 00:21:31.548 --rc geninfo_all_blocks=1 00:21:31.548 --rc geninfo_unexecuted_blocks=1 00:21:31.548 00:21:31.548 ' 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.548 09:27:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.549 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@458 -- # nvmf_veth_init 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:31.549 Cannot find device "nvmf_init_br" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:31.549 Cannot find device "nvmf_init_br2" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:31.549 Cannot find device "nvmf_tgt_br" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.549 Cannot find device "nvmf_tgt_br2" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:31.549 Cannot find device "nvmf_init_br" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:31.549 Cannot find device "nvmf_init_br2" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:31.549 Cannot find device "nvmf_tgt_br" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:31.549 Cannot find device "nvmf_tgt_br2" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:31.549 Cannot find device "nvmf_br" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:31.549 Cannot find device "nvmf_init_if" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:31.549 Cannot find device "nvmf_init_if2" 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
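The "Cannot find device" and "Cannot open network namespace" messages above come from an idempotent teardown pass: nvmf_veth_init first tries to remove any leftovers from a previous run, and each failing command is followed by "true", so these errors are expected on a fresh host. The trace that follows rebuilds the test topology from scratch; a condensed sketch of what it constructs (interface, namespace, and address names taken from that trace, with only one initiator-side and one target-side veth pair shown):

    # one initiator veth pair on the host, one target veth pair whose far end is
    # moved into a network namespace; both host-side peers join a common bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

With that in place, 10.0.0.1 (initiator, host side) can reach 10.0.0.3 (target, inside the namespace) through the bridge, which the iptables ACCEPT rules and ping checks below then open up and verify.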
00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.549 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:31.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:21:31.808 00:21:31.808 --- 10.0.0.3 ping statistics --- 00:21:31.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.808 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:31.808 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:31.808 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:21:31.808 00:21:31.808 --- 10.0.0.4 ping statistics --- 00:21:31.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.808 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:31.808 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:21:31.808 00:21:31.808 --- 10.0.0.1 ping statistics --- 00:21:31.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.809 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:31.809 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:31.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:31.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:21:31.809 00:21:31.809 --- 10.0.0.2 ping statistics --- 00:21:31.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.809 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:31.809 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.809 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # return 0 00:21:31.809 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:21:31.809 09:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:32.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:32.746 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:32.746 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:32.746 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.746 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:32.746 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:32.746 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.746 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:32.746 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:32.746 09:27:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=84742 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 84742 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 84742 ']' 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:32.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:32.747 09:27:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:32.747 [2024-10-08 09:27:24.407322] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
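nvmfappstart launches the target application inside the target namespace and then blocks until its RPC socket answers; the startup banner and DPDK EAL parameter lines that follow are the target's own stdout. A hedged sketch of that launch-and-wait pattern (waitforlisten in autotest_common.sh is more elaborate; the rpc.py poll here is a simplification of it):

    # start the target in the namespace, record its pid, then poll the RPC socket
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done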
00:21:32.747 [2024-10-08 09:27:24.407422] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.005 [2024-10-08 09:27:24.551792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.005 [2024-10-08 09:27:24.680852] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.005 [2024-10-08 09:27:24.681185] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.005 [2024-10-08 09:27:24.681453] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.005 [2024-10-08 09:27:24.681706] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.005 [2024-10-08 09:27:24.681871] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.005 [2024-10-08 09:27:24.683605] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.005 [2024-10-08 09:27:24.683705] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.005 [2024-10-08 09:27:24.683839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.005 [2024-10-08 09:27:24.683842] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.264 [2024-10-08 09:27:24.766215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:33.831 09:27:25 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
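nvme_in_userspace above enumerates candidate NVMe controllers by PCI class code (class 01h mass storage, subclass 08h non-volatile memory, prog-if 02h = NVMe) and ends up with two BDFs, 0000:00:10.0 and 0000:00:11.0. A simplified, hedged sketch of just the class-code enumeration (the real helper also applies PCI_ALLOWED/PCI_BLOCKED filtering and a per-BDF driver check that is omitted here):

    # list PCI functions whose numeric class/subclass field is 0108 (NVMe controllers)
    lspci -mm -n -D | tr -d '"' | awk '$2 == "0108" {print $1}'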
00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:33.831 09:27:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:33.831 ************************************ 00:21:33.831 START TEST spdk_target_abort 00:21:33.831 ************************************ 00:21:33.831 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:21:33.831 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:33.831 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:33.831 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.831 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:34.091 spdk_targetn1 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:34.091 [2024-10-08 09:27:25.542692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:34.091 [2024-10-08 09:27:25.570931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:34.091 09:27:25 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:34.091 09:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:37.380 Initializing NVMe Controllers 00:21:37.380 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:37.380 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:37.380 Initialization complete. Launching workers. 
00:21:37.380 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11947, failed: 0 00:21:37.380 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1037, failed to submit 10910 00:21:37.380 success 731, unsuccessful 306, failed 0 00:21:37.380 09:27:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:37.380 09:27:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:40.667 Initializing NVMe Controllers 00:21:40.667 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:40.667 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:40.667 Initialization complete. Launching workers. 00:21:40.667 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8952, failed: 0 00:21:40.667 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1153, failed to submit 7799 00:21:40.667 success 378, unsuccessful 775, failed 0 00:21:40.667 09:27:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:40.667 09:27:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:43.955 Initializing NVMe Controllers 00:21:43.955 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:43.955 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:43.955 Initialization complete. Launching workers. 
00:21:43.955 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33247, failed: 0 00:21:43.955 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2244, failed to submit 31003 00:21:43.955 success 530, unsuccessful 1714, failed 0 00:21:43.955 09:27:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:43.955 09:27:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.955 09:27:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:43.955 09:27:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.955 09:27:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:43.955 09:27:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.955 09:27:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84742 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 84742 ']' 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 84742 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84742 00:21:44.523 killing process with pid 84742 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84742' 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 84742 00:21:44.523 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 84742 00:21:44.782 00:21:44.782 real 0m10.859s 00:21:44.782 user 0m43.700s 00:21:44.782 sys 0m2.389s 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:44.782 ************************************ 00:21:44.782 END TEST spdk_target_abort 00:21:44.782 ************************************ 00:21:44.782 09:27:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:44.782 09:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:44.782 09:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:44.782 09:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:44.782 ************************************ 00:21:44.782 START TEST kernel_target_abort 00:21:44.782 
************************************ 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:21:44.782 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:44.783 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:45.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:45.350 Waiting for block devices as requested 00:21:45.350 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:45.350 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:45.350 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:21:45.350 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:45.350 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:21:45.350 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:45.350 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:45.350 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:45.350 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:21:45.350 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:45.350 09:27:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:45.350 No valid GPT data, bailing 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:45.350 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:45.610 No valid GPT data, bailing 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
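The loop above walks /sys/block/nvme*, skips zoned namespaces, and probes each remaining device for a partition table; "No valid GPT data, bailing" therefore means the device is free, and the last free one becomes the backing device for the kernel target (here /dev/nvme1n1). A rough, hedged sketch of that selection (the real block_in_use also asks spdk-gpt.py about SPDK-specific GPT partitions, which is skipped here):

    nvme=
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # treat a missing queue/zoned file as "none" (not zoned); skip zoned namespaces
        [[ $(cat "$block/queue/zoned" 2>/dev/null || echo none) == none ]] || continue
        # no partition table reported by blkid => device is considered unused
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] && nvme=/dev/$dev
    done
    echo "kernel target backing device: $nvme"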
00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:45.610 No valid GPT data, bailing 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:45.610 No valid GPT data, bailing 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:45.610 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c --hostid=a5ef64a0-86d4-4d8b-af10-05a9f556092c -a 10.0.0.1 -t tcp -s 4420 00:21:45.611 00:21:45.611 Discovery Log Number of Records 2, Generation counter 2 00:21:45.611 =====Discovery Log Entry 0====== 00:21:45.611 trtype: tcp 00:21:45.611 adrfam: ipv4 00:21:45.611 subtype: current discovery subsystem 00:21:45.611 treq: not specified, sq flow control disable supported 00:21:45.611 portid: 1 00:21:45.611 trsvcid: 4420 00:21:45.611 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:45.611 traddr: 10.0.0.1 00:21:45.611 eflags: none 00:21:45.611 sectype: none 00:21:45.611 =====Discovery Log Entry 1====== 00:21:45.611 trtype: tcp 00:21:45.611 adrfam: ipv4 00:21:45.611 subtype: nvme subsystem 00:21:45.611 treq: not specified, sq flow control disable supported 00:21:45.611 portid: 1 00:21:45.611 trsvcid: 4420 00:21:45.611 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:45.611 traddr: 10.0.0.1 00:21:45.611 eflags: none 00:21:45.611 sectype: none 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:45.611 09:27:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:45.611 09:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:48.900 Initializing NVMe Controllers 00:21:48.900 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:48.900 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:48.900 Initialization complete. Launching workers. 00:21:48.900 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37612, failed: 0 00:21:48.900 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37612, failed to submit 0 00:21:48.900 success 0, unsuccessful 37612, failed 0 00:21:48.900 09:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:48.900 09:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:52.213 Initializing NVMe Controllers 00:21:52.213 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:52.213 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:52.213 Initialization complete. Launching workers. 
00:21:52.213 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82715, failed: 0 00:21:52.213 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36260, failed to submit 46455 00:21:52.213 success 0, unsuccessful 36260, failed 0 00:21:52.213 09:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:52.213 09:27:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:55.501 Initializing NVMe Controllers 00:21:55.501 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:55.501 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:55.501 Initialization complete. Launching workers. 00:21:55.501 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103218, failed: 0 00:21:55.501 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25806, failed to submit 77412 00:21:55.501 success 0, unsuccessful 25806, failed 0 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:21:55.501 09:27:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:56.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:58.605 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:58.605 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:58.605 00:21:58.605 real 0m13.675s 00:21:58.605 user 0m6.018s 00:21:58.605 sys 0m4.944s 00:21:58.605 09:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.605 09:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:58.605 ************************************ 00:21:58.605 END TEST kernel_target_abort 00:21:58.605 ************************************ 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:58.605 
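The trace above also shows clean_kernel_target undoing, in reverse order, the configfs setup that configure_kernel_target performed before the kernel abort runs. A condensed, hedged recap of that lifecycle; the xtrace does not record redirection targets, so the attribute file names below are the standard kernel nvmet configfs ones and are an assumption about what the traced echos actually wrote:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    modprobe nvmet-tcp
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    # ... run the abort workload against 10.0.0.1:4420, then tear down bottom-up:
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet

Creating the port-to-subsystem symlink is what exposes the namespace on 10.0.0.1:4420, so it is the last step of setup and the first thing removed at teardown; the rmdir calls would otherwise fail with a busy error.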
09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.605 rmmod nvme_tcp 00:21:58.605 rmmod nvme_fabrics 00:21:58.605 rmmod nvme_keyring 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:58.605 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:58.606 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 84742 ']' 00:21:58.606 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 84742 00:21:58.606 09:27:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 84742 ']' 00:21:58.606 09:27:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 84742 00:21:58.606 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84742) - No such process 00:21:58.606 Process with pid 84742 is not found 00:21:58.606 09:27:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 84742 is not found' 00:21:58.606 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:21:58.606 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:58.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:59.122 Waiting for block devices as requested 00:21:59.122 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:59.122 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:59.122 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:59.122 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:59.122 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:59.122 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:21:59.122 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:59.122 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:59.380 09:27:50 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:59.380 09:27:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.380 09:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:59.380 00:21:59.380 real 0m28.193s 00:21:59.380 user 0m51.048s 00:21:59.380 sys 0m8.841s 00:21:59.380 09:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.380 ************************************ 00:21:59.380 END TEST nvmf_abort_qd_sizes 00:21:59.381 09:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:59.381 ************************************ 00:21:59.640 09:27:51 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:59.640 09:27:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:59.640 09:27:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.640 09:27:51 -- common/autotest_common.sh@10 -- # set +x 00:21:59.640 ************************************ 00:21:59.640 START TEST keyring_file 00:21:59.640 ************************************ 00:21:59.640 09:27:51 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:59.640 * Looking for test storage... 
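The teardown traced above dismantles the virtual test network in a fixed order: bridge ports are detached (nomaster) and brought down before the bridge itself is deleted, then the host-side veth ends are removed, and finally the target-side ends inside the nvmf_tgt_ns_spdk namespace. A minimal sketch mirroring those ip invocations, assuming the interface and namespace names this fixture creates:

```python
# Sketch of the nvmf_veth_fini sequence seen in the trace; failures are tolerated
# because a partially built topology may already be missing some interfaces.
import subprocess

def run(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=False)

def nvmf_veth_fini(ns: str = "nvmf_tgt_ns_spdk") -> None:
    ports = ("nvmf_init_br", "nvmf_init_br2", "nvmf_tgt_br", "nvmf_tgt_br2")
    for dev in ports:
        run(f"ip link set {dev} nomaster")               # detach port from the bridge
    for dev in ports:
        run(f"ip link set {dev} down")                   # bring the port down
    run("ip link delete nvmf_br type bridge")            # remove the bridge itself
    for dev in ("nvmf_init_if", "nvmf_init_if2"):
        run(f"ip link delete {dev}")                     # host-side veth ends
    for dev in ("nvmf_tgt_if", "nvmf_tgt_if2"):
        run(f"ip netns exec {ns} ip link delete {dev}")  # target-side ends in the namespace
    # Namespace removal itself is handled by remove_spdk_ns in the trace
    # (assumed equivalent to "ip netns del").
```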
00:21:59.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:59.640 09:27:51 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:59.641 09:27:51 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:21:59.641 09:27:51 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:59.641 09:27:51 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:59.641 09:27:51 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.641 09:27:51 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:59.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.641 --rc genhtml_branch_coverage=1 00:21:59.641 --rc genhtml_function_coverage=1 00:21:59.641 --rc genhtml_legend=1 00:21:59.641 --rc geninfo_all_blocks=1 00:21:59.641 --rc geninfo_unexecuted_blocks=1 00:21:59.641 00:21:59.641 ' 00:21:59.641 09:27:51 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:59.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.641 --rc genhtml_branch_coverage=1 00:21:59.641 --rc genhtml_function_coverage=1 00:21:59.641 --rc genhtml_legend=1 00:21:59.641 --rc geninfo_all_blocks=1 00:21:59.641 --rc 
geninfo_unexecuted_blocks=1 00:21:59.641 00:21:59.641 ' 00:21:59.641 09:27:51 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:59.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.641 --rc genhtml_branch_coverage=1 00:21:59.641 --rc genhtml_function_coverage=1 00:21:59.641 --rc genhtml_legend=1 00:21:59.641 --rc geninfo_all_blocks=1 00:21:59.641 --rc geninfo_unexecuted_blocks=1 00:21:59.641 00:21:59.641 ' 00:21:59.641 09:27:51 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:59.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.641 --rc genhtml_branch_coverage=1 00:21:59.641 --rc genhtml_function_coverage=1 00:21:59.641 --rc genhtml_legend=1 00:21:59.641 --rc geninfo_all_blocks=1 00:21:59.641 --rc geninfo_unexecuted_blocks=1 00:21:59.641 00:21:59.641 ' 00:21:59.641 09:27:51 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:59.641 09:27:51 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.641 09:27:51 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.641 09:27:51 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.641 09:27:51 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.641 09:27:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.641 09:27:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:59.641 09:27:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.641 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.641 09:27:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:59.641 09:27:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:59.641 09:27:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:59.641 09:27:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:59.641 09:27:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:59.641 09:27:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:59.641 09:27:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:59.641 09:27:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:59.641 09:27:51 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:59.641 09:27:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:59.641 09:27:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:59.641 09:27:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:59.641 09:27:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5UfjAfNUme 00:21:59.641 09:27:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:21:59.641 09:27:51 keyring_file -- nvmf/common.sh@731 -- # python - 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5UfjAfNUme 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5UfjAfNUme 00:21:59.901 09:27:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5UfjAfNUme 00:21:59.901 09:27:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1sR1S0r6AD 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:59.901 09:27:51 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:59.901 09:27:51 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:21:59.901 09:27:51 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:59.901 09:27:51 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:21:59.901 09:27:51 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:21:59.901 09:27:51 keyring_file -- nvmf/common.sh@731 -- # python - 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1sR1S0r6AD 00:21:59.901 09:27:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1sR1S0r6AD 00:21:59.901 09:27:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.1sR1S0r6AD 00:21:59.901 09:27:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=85663 00:21:59.901 09:27:51 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.901 09:27:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85663 00:21:59.901 09:27:51 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85663 ']' 00:21:59.901 09:27:51 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.901 09:27:51 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.901 09:27:51 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:59.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.901 09:27:51 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.901 09:27:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:59.901 [2024-10-08 09:27:51.498908] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:21:59.901 [2024-10-08 09:27:51.499010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85663 ] 00:22:00.160 [2024-10-08 09:27:51.635811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.160 [2024-10-08 09:27:51.727287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.160 [2024-10-08 09:27:51.814718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:22:01.097 09:27:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.097 [2024-10-08 09:27:52.537265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.097 null0 00:22:01.097 [2024-10-08 09:27:52.569240] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.097 [2024-10-08 09:27:52.569474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.097 09:27:52 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.097 [2024-10-08 09:27:52.597222] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:01.097 request: 00:22:01.097 { 00:22:01.097 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:01.097 "secure_channel": false, 00:22:01.097 "listen_address": { 00:22:01.097 "trtype": "tcp", 00:22:01.097 "traddr": "127.0.0.1", 00:22:01.097 "trsvcid": "4420" 00:22:01.097 }, 00:22:01.097 "method": "nvmf_subsystem_add_listener", 00:22:01.097 "req_id": 1 00:22:01.097 } 
00:22:01.097 Got JSON-RPC error response 00:22:01.097 response: 00:22:01.097 { 00:22:01.097 "code": -32602, 00:22:01.097 "message": "Invalid parameters" 00:22:01.097 } 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.097 09:27:52 keyring_file -- keyring/file.sh@47 -- # bperfpid=85679 00:22:01.097 09:27:52 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85679 /var/tmp/bperf.sock 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85679 ']' 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:01.097 09:27:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:01.098 09:27:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:01.098 09:27:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.098 09:27:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.098 09:27:52 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:01.098 [2024-10-08 09:27:52.662835] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
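The request/response pair above is the expected-failure half of the listener test: 127.0.0.1:4420 is already registered for nqn.2016-06.io.spdk:cnode0, so a second nvmf_subsystem_add_listener must come back as -32602 (Invalid parameters) rather than succeed. A minimal sketch of the same assertion driven through scripts/rpc.py against the target's default RPC socket; the socket path and parameters are taken from the trace, and the exact stderr wording checked at the end is an assumption:

```python
# Re-issue the duplicate listener registration and require that it is rejected,
# mirroring the "NOT rpc_cmd nvmf_subsystem_add_listener ..." wrapper above.
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

result = subprocess.run(
    [RPC, "-s", "/var/tmp/spdk.sock", "nvmf_subsystem_add_listener",
     "nqn.2016-06.io.spdk:cnode0", "-t", "tcp", "-a", "127.0.0.1", "-s", "4420"],
    capture_output=True, text=True,
)
assert result.returncode != 0, "duplicate listener registration should fail"
assert "Invalid parameters" in result.stderr  # rpc.py echoes the JSON-RPC error
```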
00:22:01.098 [2024-10-08 09:27:52.662930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85679 ] 00:22:01.356 [2024-10-08 09:27:52.805040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.356 [2024-10-08 09:27:52.919797] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.356 [2024-10-08 09:27:52.979945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:01.924 09:27:53 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.924 09:27:53 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:22:01.924 09:27:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5UfjAfNUme 00:22:01.924 09:27:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5UfjAfNUme 00:22:02.183 09:27:53 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1sR1S0r6AD 00:22:02.183 09:27:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1sR1S0r6AD 00:22:02.442 09:27:54 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:22:02.442 09:27:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:02.442 09:27:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:02.442 09:27:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:02.442 09:27:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:03.010 09:27:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5UfjAfNUme == \/\t\m\p\/\t\m\p\.\5\U\f\j\A\f\N\U\m\e ]] 00:22:03.010 09:27:54 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:22:03.010 09:27:54 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:22:03.010 09:27:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.010 09:27:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.010 09:27:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:03.010 09:27:54 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.1sR1S0r6AD == \/\t\m\p\/\t\m\p\.\1\s\R\1\S\0\r\6\A\D ]] 00:22:03.010 09:27:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:22:03.010 09:27:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:03.010 09:27:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:03.010 09:27:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.010 09:27:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.010 09:27:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:03.269 09:27:54 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:03.269 09:27:54 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:22:03.269 09:27:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:03.269 09:27:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:03.269 09:27:54 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:03.269 09:27:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.269 09:27:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.527 09:27:55 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:22:03.527 09:27:55 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:03.527 09:27:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:03.786 [2024-10-08 09:27:55.382137] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.786 nvme0n1 00:22:04.045 09:27:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.045 09:27:55 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:22:04.045 09:27:55 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:04.045 09:27:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.612 09:27:55 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:22:04.612 09:27:55 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:04.612 Running I/O for 1 seconds... 
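The attach above succeeds because key0 points at /tmp/tmp.5UfjAfNUme, which prep_key wrote earlier in the trace in the NVMe TLS PSK interchange form (format_interchange_psk driving an inline python step). A minimal sketch of that formatting, assuming the key characters are used verbatim, as this test does, with a little-endian CRC32 appended before base64 encoding and the 0 digest selecting the no-hash variant:

```python
# Sketch of the "NVMeTLSkey-1:..." interchange formatting behind prep_key.
import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")  # integrity tag appended to the PSK
    b64 = base64.b64encode(data + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

# key0 as prepared by the test; the chmod 0600 on the resulting file is what
# keyring_file_add_key later insists on (0660 is rejected further down).
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```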
00:22:05.548 12911.00 IOPS, 50.43 MiB/s 00:22:05.548 Latency(us) 00:22:05.548 [2024-10-08T09:27:57.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.548 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:05.548 nvme0n1 : 1.01 12971.20 50.67 0.00 0.00 9844.44 3902.37 20375.74 00:22:05.548 [2024-10-08T09:27:57.231Z] =================================================================================================================== 00:22:05.548 [2024-10-08T09:27:57.231Z] Total : 12971.20 50.67 0.00 0.00 9844.44 3902.37 20375.74 00:22:05.548 { 00:22:05.548 "results": [ 00:22:05.548 { 00:22:05.548 "job": "nvme0n1", 00:22:05.548 "core_mask": "0x2", 00:22:05.548 "workload": "randrw", 00:22:05.548 "percentage": 50, 00:22:05.548 "status": "finished", 00:22:05.548 "queue_depth": 128, 00:22:05.548 "io_size": 4096, 00:22:05.548 "runtime": 1.005304, 00:22:05.548 "iops": 12971.200751215552, 00:22:05.548 "mibps": 50.66875293443575, 00:22:05.548 "io_failed": 0, 00:22:05.548 "io_timeout": 0, 00:22:05.548 "avg_latency_us": 9844.442449525934, 00:22:05.548 "min_latency_us": 3902.370909090909, 00:22:05.548 "max_latency_us": 20375.738181818182 00:22:05.548 } 00:22:05.548 ], 00:22:05.548 "core_count": 1 00:22:05.548 } 00:22:05.548 09:27:57 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:05.548 09:27:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:05.807 09:27:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:22:05.807 09:27:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:05.807 09:27:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:05.807 09:27:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:05.807 09:27:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:05.807 09:27:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:06.083 09:27:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:06.083 09:27:57 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:22:06.083 09:27:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:06.083 09:27:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:06.083 09:27:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:06.083 09:27:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.083 09:27:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:06.341 09:27:57 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:22:06.341 09:27:57 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:06.341 09:27:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:06.341 09:27:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:06.341 09:27:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:06.341 09:27:57 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.341 09:27:57 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:06.341 09:27:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.341 09:27:57 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:06.341 09:27:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:06.599 [2024-10-08 09:27:58.163214] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:06.599 [2024-10-08 09:27:58.163722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb016a0 (107): Transport endpoint is not connected 00:22:06.599 [2024-10-08 09:27:58.164707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb016a0 (9): Bad file descriptor 00:22:06.599 [2024-10-08 09:27:58.165703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.599 [2024-10-08 09:27:58.165835] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:06.599 [2024-10-08 09:27:58.165915] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:06.599 [2024-10-08 09:27:58.166000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
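Interleaved with these error paths, the get_refcnt helper keeps confirming that key references are only held by live controllers: it lists the keyring over the bdevperf RPC socket and filters one entry with jq, exactly as in the keyring_get_keys calls throughout this trace. A minimal sketch of the same lookup, with the socket path and field names taken from the trace:

```python
# List keys on the bperf socket and pull a single refcnt, mirroring the
# "keyring_get_keys | jq '.[] | select(.name == ...)' | jq -r .refcnt" pipeline.
import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
BPERF_SOCK = "/var/tmp/bperf.sock"

def get_refcnt(name: str) -> int:
    out = subprocess.run([RPC, "-s", BPERF_SOCK, "keyring_get_keys"],
                         capture_output=True, text=True, check=True).stdout
    return next(k["refcnt"] for k in json.loads(out) if k["name"] == name)

# A failed attach with the wrong PSK must not leave a dangling reference.
assert get_refcnt("key0") == 1
assert get_refcnt("key1") == 1
```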
00:22:06.599 request: 00:22:06.599 { 00:22:06.599 "name": "nvme0", 00:22:06.599 "trtype": "tcp", 00:22:06.599 "traddr": "127.0.0.1", 00:22:06.599 "adrfam": "ipv4", 00:22:06.599 "trsvcid": "4420", 00:22:06.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:06.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:06.600 "prchk_reftag": false, 00:22:06.600 "prchk_guard": false, 00:22:06.600 "hdgst": false, 00:22:06.600 "ddgst": false, 00:22:06.600 "psk": "key1", 00:22:06.600 "allow_unrecognized_csi": false, 00:22:06.600 "method": "bdev_nvme_attach_controller", 00:22:06.600 "req_id": 1 00:22:06.600 } 00:22:06.600 Got JSON-RPC error response 00:22:06.600 response: 00:22:06.600 { 00:22:06.600 "code": -5, 00:22:06.600 "message": "Input/output error" 00:22:06.600 } 00:22:06.600 09:27:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:06.600 09:27:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.600 09:27:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.600 09:27:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.600 09:27:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:22:06.600 09:27:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:06.600 09:27:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:06.600 09:27:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:06.600 09:27:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:06.600 09:27:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.859 09:27:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:06.859 09:27:58 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:22:06.859 09:27:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:06.859 09:27:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:06.859 09:27:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:06.859 09:27:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:06.859 09:27:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.117 09:27:58 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:22:07.117 09:27:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:22:07.117 09:27:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:07.376 09:27:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:22:07.376 09:27:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:07.944 09:27:59 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:22:07.944 09:27:59 keyring_file -- keyring/file.sh@78 -- # jq length 00:22:07.944 09:27:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.944 09:27:59 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:22:07.944 09:27:59 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.5UfjAfNUme 00:22:07.944 09:27:59 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5UfjAfNUme 00:22:07.944 09:27:59 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:22:07.944 09:27:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5UfjAfNUme 00:22:07.944 09:27:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:07.944 09:27:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.944 09:27:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:07.944 09:27:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.944 09:27:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5UfjAfNUme 00:22:07.944 09:27:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5UfjAfNUme 00:22:08.203 [2024-10-08 09:27:59.803863] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5UfjAfNUme': 0100660 00:22:08.203 [2024-10-08 09:27:59.804295] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:08.203 request: 00:22:08.203 { 00:22:08.203 "name": "key0", 00:22:08.203 "path": "/tmp/tmp.5UfjAfNUme", 00:22:08.203 "method": "keyring_file_add_key", 00:22:08.203 "req_id": 1 00:22:08.203 } 00:22:08.203 Got JSON-RPC error response 00:22:08.203 response: 00:22:08.203 { 00:22:08.203 "code": -1, 00:22:08.203 "message": "Operation not permitted" 00:22:08.203 } 00:22:08.203 09:27:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:08.203 09:27:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.203 09:27:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.203 09:27:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.203 09:27:59 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.5UfjAfNUme 00:22:08.203 09:27:59 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5UfjAfNUme 00:22:08.203 09:27:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5UfjAfNUme 00:22:08.461 09:28:00 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.5UfjAfNUme 00:22:08.461 09:28:00 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:22:08.461 09:28:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:08.461 09:28:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:08.461 09:28:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:08.461 09:28:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:08.461 09:28:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:08.720 09:28:00 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:22:08.720 09:28:00 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:08.720 09:28:00 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:08.720 09:28:00 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:08.720 09:28:00 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:08.720 09:28:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.720 09:28:00 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:08.720 09:28:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.720 09:28:00 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:08.720 09:28:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:08.979 [2024-10-08 09:28:00.586002] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5UfjAfNUme': No such file or directory 00:22:08.979 [2024-10-08 09:28:00.586412] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:08.979 [2024-10-08 09:28:00.586547] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:08.979 [2024-10-08 09:28:00.586635] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:22:08.979 [2024-10-08 09:28:00.586721] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:08.979 [2024-10-08 09:28:00.586763] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:08.979 request: 00:22:08.979 { 00:22:08.979 "name": "nvme0", 00:22:08.979 "trtype": "tcp", 00:22:08.979 "traddr": "127.0.0.1", 00:22:08.979 "adrfam": "ipv4", 00:22:08.979 "trsvcid": "4420", 00:22:08.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.979 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:08.979 "prchk_reftag": false, 00:22:08.979 "prchk_guard": false, 00:22:08.979 "hdgst": false, 00:22:08.979 "ddgst": false, 00:22:08.979 "psk": "key0", 00:22:08.979 "allow_unrecognized_csi": false, 00:22:08.979 "method": "bdev_nvme_attach_controller", 00:22:08.979 "req_id": 1 00:22:08.979 } 00:22:08.979 Got JSON-RPC error response 00:22:08.979 response: 00:22:08.979 { 00:22:08.979 "code": -19, 00:22:08.979 "message": "No such device" 00:22:08.979 } 00:22:08.979 09:28:00 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:08.979 09:28:00 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.979 09:28:00 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.979 09:28:00 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.979 09:28:00 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:22:08.979 09:28:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:09.238 09:28:00 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:09.238 09:28:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:09.238 09:28:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:09.238 09:28:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:09.238 
09:28:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:09.238 09:28:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:09.238 09:28:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8aBwBkC1Ac 00:22:09.238 09:28:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:09.238 09:28:00 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:09.238 09:28:00 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:22:09.238 09:28:00 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:09.238 09:28:00 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:09.238 09:28:00 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:22:09.238 09:28:00 keyring_file -- nvmf/common.sh@731 -- # python - 00:22:09.497 09:28:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8aBwBkC1Ac 00:22:09.497 09:28:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8aBwBkC1Ac 00:22:09.497 09:28:00 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.8aBwBkC1Ac 00:22:09.497 09:28:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8aBwBkC1Ac 00:22:09.497 09:28:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8aBwBkC1Ac 00:22:09.497 09:28:01 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:09.497 09:28:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:09.756 nvme0n1 00:22:10.015 09:28:01 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:22:10.015 09:28:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:10.015 09:28:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.015 09:28:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.015 09:28:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:10.015 09:28:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.273 09:28:01 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:22:10.273 09:28:01 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:22:10.273 09:28:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:10.532 09:28:01 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:22:10.532 09:28:01 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:22:10.532 09:28:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.532 09:28:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.532 09:28:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:10.791 09:28:02 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:22:10.791 09:28:02 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:22:10.791 09:28:02 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:22:10.791 09:28:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.791 09:28:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.791 09:28:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:10.791 09:28:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.050 09:28:02 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:22:11.050 09:28:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:11.050 09:28:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:11.309 09:28:02 keyring_file -- keyring/file.sh@105 -- # jq length 00:22:11.309 09:28:02 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:22:11.309 09:28:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.568 09:28:03 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:22:11.568 09:28:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8aBwBkC1Ac 00:22:11.568 09:28:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8aBwBkC1Ac 00:22:11.826 09:28:03 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1sR1S0r6AD 00:22:11.826 09:28:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1sR1S0r6AD 00:22:12.085 09:28:03 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:12.085 09:28:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:12.344 nvme0n1 00:22:12.344 09:28:03 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:22:12.344 09:28:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:12.603 09:28:04 keyring_file -- keyring/file.sh@113 -- # config='{ 00:22:12.603 "subsystems": [ 00:22:12.603 { 00:22:12.603 "subsystem": "keyring", 00:22:12.603 "config": [ 00:22:12.603 { 00:22:12.603 "method": "keyring_file_add_key", 00:22:12.603 "params": { 00:22:12.603 "name": "key0", 00:22:12.603 "path": "/tmp/tmp.8aBwBkC1Ac" 00:22:12.603 } 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "method": "keyring_file_add_key", 00:22:12.603 "params": { 00:22:12.603 "name": "key1", 00:22:12.603 "path": "/tmp/tmp.1sR1S0r6AD" 00:22:12.603 } 00:22:12.603 } 00:22:12.603 ] 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "subsystem": "iobuf", 00:22:12.603 "config": [ 00:22:12.603 { 00:22:12.603 "method": "iobuf_set_options", 00:22:12.603 "params": { 00:22:12.603 "small_pool_count": 8192, 00:22:12.603 "large_pool_count": 1024, 00:22:12.603 "small_bufsize": 8192, 00:22:12.603 "large_bufsize": 135168 00:22:12.603 } 00:22:12.603 } 00:22:12.603 ] 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "subsystem": "sock", 00:22:12.603 "config": [ 
00:22:12.603 { 00:22:12.603 "method": "sock_set_default_impl", 00:22:12.603 "params": { 00:22:12.603 "impl_name": "uring" 00:22:12.603 } 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "method": "sock_impl_set_options", 00:22:12.603 "params": { 00:22:12.603 "impl_name": "ssl", 00:22:12.603 "recv_buf_size": 4096, 00:22:12.603 "send_buf_size": 4096, 00:22:12.603 "enable_recv_pipe": true, 00:22:12.603 "enable_quickack": false, 00:22:12.603 "enable_placement_id": 0, 00:22:12.603 "enable_zerocopy_send_server": true, 00:22:12.603 "enable_zerocopy_send_client": false, 00:22:12.603 "zerocopy_threshold": 0, 00:22:12.603 "tls_version": 0, 00:22:12.603 "enable_ktls": false 00:22:12.603 } 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "method": "sock_impl_set_options", 00:22:12.603 "params": { 00:22:12.603 "impl_name": "posix", 00:22:12.603 "recv_buf_size": 2097152, 00:22:12.603 "send_buf_size": 2097152, 00:22:12.603 "enable_recv_pipe": true, 00:22:12.603 "enable_quickack": false, 00:22:12.603 "enable_placement_id": 0, 00:22:12.603 "enable_zerocopy_send_server": true, 00:22:12.603 "enable_zerocopy_send_client": false, 00:22:12.603 "zerocopy_threshold": 0, 00:22:12.603 "tls_version": 0, 00:22:12.603 "enable_ktls": false 00:22:12.603 } 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "method": "sock_impl_set_options", 00:22:12.603 "params": { 00:22:12.603 "impl_name": "uring", 00:22:12.603 "recv_buf_size": 2097152, 00:22:12.603 "send_buf_size": 2097152, 00:22:12.603 "enable_recv_pipe": true, 00:22:12.603 "enable_quickack": false, 00:22:12.603 "enable_placement_id": 0, 00:22:12.603 "enable_zerocopy_send_server": false, 00:22:12.603 "enable_zerocopy_send_client": false, 00:22:12.603 "zerocopy_threshold": 0, 00:22:12.603 "tls_version": 0, 00:22:12.603 "enable_ktls": false 00:22:12.603 } 00:22:12.603 } 00:22:12.603 ] 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "subsystem": "vmd", 00:22:12.603 "config": [] 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "subsystem": "accel", 00:22:12.603 "config": [ 00:22:12.603 { 00:22:12.603 "method": "accel_set_options", 00:22:12.603 "params": { 00:22:12.603 "small_cache_size": 128, 00:22:12.603 "large_cache_size": 16, 00:22:12.603 "task_count": 2048, 00:22:12.603 "sequence_count": 2048, 00:22:12.603 "buf_count": 2048 00:22:12.603 } 00:22:12.603 } 00:22:12.603 ] 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "subsystem": "bdev", 00:22:12.603 "config": [ 00:22:12.603 { 00:22:12.603 "method": "bdev_set_options", 00:22:12.603 "params": { 00:22:12.603 "bdev_io_pool_size": 65535, 00:22:12.603 "bdev_io_cache_size": 256, 00:22:12.603 "bdev_auto_examine": true, 00:22:12.603 "iobuf_small_cache_size": 128, 00:22:12.603 "iobuf_large_cache_size": 16 00:22:12.603 } 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "method": "bdev_raid_set_options", 00:22:12.603 "params": { 00:22:12.603 "process_window_size_kb": 1024, 00:22:12.603 "process_max_bandwidth_mb_sec": 0 00:22:12.603 } 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "method": "bdev_iscsi_set_options", 00:22:12.603 "params": { 00:22:12.603 "timeout_sec": 30 00:22:12.603 } 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "method": "bdev_nvme_set_options", 00:22:12.603 "params": { 00:22:12.603 "action_on_timeout": "none", 00:22:12.603 "timeout_us": 0, 00:22:12.603 "timeout_admin_us": 0, 00:22:12.603 "keep_alive_timeout_ms": 10000, 00:22:12.603 "arbitration_burst": 0, 00:22:12.603 "low_priority_weight": 0, 00:22:12.603 "medium_priority_weight": 0, 00:22:12.603 "high_priority_weight": 0, 00:22:12.603 "nvme_adminq_poll_period_us": 10000, 00:22:12.603 
"nvme_ioq_poll_period_us": 0, 00:22:12.603 "io_queue_requests": 512, 00:22:12.603 "delay_cmd_submit": true, 00:22:12.603 "transport_retry_count": 4, 00:22:12.603 "bdev_retry_count": 3, 00:22:12.603 "transport_ack_timeout": 0, 00:22:12.603 "ctrlr_loss_timeout_sec": 0, 00:22:12.603 "reconnect_delay_sec": 0, 00:22:12.603 "fast_io_fail_timeout_sec": 0, 00:22:12.603 "disable_auto_failback": false, 00:22:12.603 "generate_uuids": false, 00:22:12.603 "transport_tos": 0, 00:22:12.603 "nvme_error_stat": false, 00:22:12.603 "rdma_srq_size": 0, 00:22:12.603 "io_path_stat": false, 00:22:12.603 "allow_accel_sequence": false, 00:22:12.603 "rdma_max_cq_size": 0, 00:22:12.603 "rdma_cm_event_timeout_ms": 0, 00:22:12.603 "dhchap_digests": [ 00:22:12.603 "sha256", 00:22:12.603 "sha384", 00:22:12.603 "sha512" 00:22:12.603 ], 00:22:12.603 "dhchap_dhgroups": [ 00:22:12.603 "null", 00:22:12.603 "ffdhe2048", 00:22:12.603 "ffdhe3072", 00:22:12.603 "ffdhe4096", 00:22:12.603 "ffdhe6144", 00:22:12.603 "ffdhe8192" 00:22:12.603 ] 00:22:12.603 } 00:22:12.603 }, 00:22:12.603 { 00:22:12.603 "method": "bdev_nvme_attach_controller", 00:22:12.603 "params": { 00:22:12.603 "name": "nvme0", 00:22:12.603 "trtype": "TCP", 00:22:12.603 "adrfam": "IPv4", 00:22:12.603 "traddr": "127.0.0.1", 00:22:12.603 "trsvcid": "4420", 00:22:12.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:12.603 "prchk_reftag": false, 00:22:12.604 "prchk_guard": false, 00:22:12.604 "ctrlr_loss_timeout_sec": 0, 00:22:12.604 "reconnect_delay_sec": 0, 00:22:12.604 "fast_io_fail_timeout_sec": 0, 00:22:12.604 "psk": "key0", 00:22:12.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:12.604 "hdgst": false, 00:22:12.604 "ddgst": false, 00:22:12.604 "multipath": "multipath" 00:22:12.604 } 00:22:12.604 }, 00:22:12.604 { 00:22:12.604 "method": "bdev_nvme_set_hotplug", 00:22:12.604 "params": { 00:22:12.604 "period_us": 100000, 00:22:12.604 "enable": false 00:22:12.604 } 00:22:12.604 }, 00:22:12.604 { 00:22:12.604 "method": "bdev_wait_for_examine" 00:22:12.604 } 00:22:12.604 ] 00:22:12.604 }, 00:22:12.604 { 00:22:12.604 "subsystem": "nbd", 00:22:12.604 "config": [] 00:22:12.604 } 00:22:12.604 ] 00:22:12.604 }' 00:22:12.604 09:28:04 keyring_file -- keyring/file.sh@115 -- # killprocess 85679 00:22:12.604 09:28:04 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85679 ']' 00:22:12.604 09:28:04 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85679 00:22:12.604 09:28:04 keyring_file -- common/autotest_common.sh@955 -- # uname 00:22:12.604 09:28:04 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:12.604 09:28:04 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85679 00:22:12.604 killing process with pid 85679 00:22:12.604 Received shutdown signal, test time was about 1.000000 seconds 00:22:12.604 00:22:12.604 Latency(us) 00:22:12.604 [2024-10-08T09:28:04.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.604 [2024-10-08T09:28:04.287Z] =================================================================================================================== 00:22:12.604 [2024-10-08T09:28:04.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.604 09:28:04 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:12.604 09:28:04 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:12.604 09:28:04 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85679' 00:22:12.604 09:28:04 keyring_file -- 
common/autotest_common.sh@969 -- # kill 85679 00:22:12.604 09:28:04 keyring_file -- common/autotest_common.sh@974 -- # wait 85679 00:22:12.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:12.863 09:28:04 keyring_file -- keyring/file.sh@118 -- # bperfpid=85931 00:22:12.863 09:28:04 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85931 /var/tmp/bperf.sock 00:22:12.863 09:28:04 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85931 ']' 00:22:12.863 09:28:04 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:12.863 09:28:04 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:12.863 09:28:04 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:12.863 09:28:04 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:12.863 09:28:04 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:12.863 09:28:04 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:22:12.863 "subsystems": [ 00:22:12.863 { 00:22:12.863 "subsystem": "keyring", 00:22:12.863 "config": [ 00:22:12.863 { 00:22:12.863 "method": "keyring_file_add_key", 00:22:12.863 "params": { 00:22:12.863 "name": "key0", 00:22:12.863 "path": "/tmp/tmp.8aBwBkC1Ac" 00:22:12.863 } 00:22:12.863 }, 00:22:12.863 { 00:22:12.863 "method": "keyring_file_add_key", 00:22:12.863 "params": { 00:22:12.863 "name": "key1", 00:22:12.863 "path": "/tmp/tmp.1sR1S0r6AD" 00:22:12.863 } 00:22:12.863 } 00:22:12.863 ] 00:22:12.863 }, 00:22:12.863 { 00:22:12.863 "subsystem": "iobuf", 00:22:12.863 "config": [ 00:22:12.863 { 00:22:12.863 "method": "iobuf_set_options", 00:22:12.863 "params": { 00:22:12.863 "small_pool_count": 8192, 00:22:12.863 "large_pool_count": 1024, 00:22:12.863 "small_bufsize": 8192, 00:22:12.863 "large_bufsize": 135168 00:22:12.863 } 00:22:12.863 } 00:22:12.863 ] 00:22:12.863 }, 00:22:12.863 { 00:22:12.863 "subsystem": "sock", 00:22:12.863 "config": [ 00:22:12.863 { 00:22:12.863 "method": "sock_set_default_impl", 00:22:12.863 "params": { 00:22:12.863 "impl_name": "uring" 00:22:12.863 } 00:22:12.863 }, 00:22:12.863 { 00:22:12.863 "method": "sock_impl_set_options", 00:22:12.863 "params": { 00:22:12.863 "impl_name": "ssl", 00:22:12.863 "recv_buf_size": 4096, 00:22:12.863 "send_buf_size": 4096, 00:22:12.863 "enable_recv_pipe": true, 00:22:12.863 "enable_quickack": false, 00:22:12.863 "enable_placement_id": 0, 00:22:12.864 "enable_zerocopy_send_server": true, 00:22:12.864 "enable_zerocopy_send_client": false, 00:22:12.864 "zerocopy_threshold": 0, 00:22:12.864 "tls_version": 0, 00:22:12.864 "enable_ktls": false 00:22:12.864 } 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "method": "sock_impl_set_options", 00:22:12.864 "params": { 00:22:12.864 "impl_name": "posix", 00:22:12.864 "recv_buf_size": 2097152, 00:22:12.864 "send_buf_size": 2097152, 00:22:12.864 "enable_recv_pipe": true, 00:22:12.864 "enable_quickack": false, 00:22:12.864 "enable_placement_id": 0, 00:22:12.864 "enable_zerocopy_send_server": true, 00:22:12.864 "enable_zerocopy_send_client": false, 00:22:12.864 "zerocopy_threshold": 0, 00:22:12.864 "tls_version": 0, 00:22:12.864 "enable_ktls": false 00:22:12.864 } 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "method": "sock_impl_set_options", 00:22:12.864 "params": { 00:22:12.864 "impl_name": 
"uring", 00:22:12.864 "recv_buf_size": 2097152, 00:22:12.864 "send_buf_size": 2097152, 00:22:12.864 "enable_recv_pipe": true, 00:22:12.864 "enable_quickack": false, 00:22:12.864 "enable_placement_id": 0, 00:22:12.864 "enable_zerocopy_send_server": false, 00:22:12.864 "enable_zerocopy_send_client": false, 00:22:12.864 "zerocopy_threshold": 0, 00:22:12.864 "tls_version": 0, 00:22:12.864 "enable_ktls": false 00:22:12.864 } 00:22:12.864 } 00:22:12.864 ] 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "subsystem": "vmd", 00:22:12.864 "config": [] 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "subsystem": "accel", 00:22:12.864 "config": [ 00:22:12.864 { 00:22:12.864 "method": "accel_set_options", 00:22:12.864 "params": { 00:22:12.864 "small_cache_size": 128, 00:22:12.864 "large_cache_size": 16, 00:22:12.864 "task_count": 2048, 00:22:12.864 "sequence_count": 2048, 00:22:12.864 "buf_count": 2048 00:22:12.864 } 00:22:12.864 } 00:22:12.864 ] 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "subsystem": "bdev", 00:22:12.864 "config": [ 00:22:12.864 { 00:22:12.864 "method": "bdev_set_options", 00:22:12.864 "params": { 00:22:12.864 "bdev_io_pool_size": 65535, 00:22:12.864 "bdev_io_cache_size": 256, 00:22:12.864 "bdev_auto_examine": true, 00:22:12.864 "iobuf_small_cache_size": 128, 00:22:12.864 "iobuf_large_cache_size": 16 00:22:12.864 } 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "method": "bdev_raid_set_options", 00:22:12.864 "params": { 00:22:12.864 "process_window_size_kb": 1024, 00:22:12.864 "process_max_bandwidth_mb_sec": 0 00:22:12.864 } 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "method": "bdev_iscsi_set_options", 00:22:12.864 "params": { 00:22:12.864 "timeout_sec": 30 00:22:12.864 } 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "method": "bdev_nvme_set_options", 00:22:12.864 "params": { 00:22:12.864 "action_on_timeout": "none", 00:22:12.864 "timeout_us": 0, 00:22:12.864 "timeout_admin_us": 0, 00:22:12.864 "keep_alive_timeout_ms": 10000, 00:22:12.864 "arbitration_burst": 0, 00:22:12.864 "low_priority_weight": 0, 00:22:12.864 "medium_priority_weight": 0, 00:22:12.864 "high_priority_weight": 0, 00:22:12.864 "nvme_adminq_poll_period_us": 10000, 00:22:12.864 "nvme_ioq_poll_period_us": 0, 00:22:12.864 "io_queue_requests": 512, 00:22:12.864 "delay_cmd_submit": true, 00:22:12.864 "transport_retry_count": 4, 00:22:12.864 "bdev_retry_count": 3, 00:22:12.864 "transport_ack_timeout": 0, 00:22:12.864 "ctrlr_loss_timeout_sec": 0, 00:22:12.864 "reconnect_delay_sec": 0, 00:22:12.864 "fast_io_fail_timeout_sec": 0, 00:22:12.864 "disable_auto_failback": false, 00:22:12.864 "generate_uuids": false, 00:22:12.864 "transport_tos": 0, 00:22:12.864 "nvme_error_stat": false, 00:22:12.864 "rdma_srq_size": 0, 00:22:12.864 "io_path_stat": false, 00:22:12.864 "allow_accel_sequence": false, 00:22:12.864 "rdma_max_cq_size": 0, 00:22:12.864 "rdma_cm_event_timeout_ms": 0, 00:22:12.864 "dhchap_digests": [ 00:22:12.864 "sha256", 00:22:12.864 "sha384", 00:22:12.864 "sha512" 00:22:12.864 ], 00:22:12.864 "dhchap_dhgroups": [ 00:22:12.864 "null", 00:22:12.864 "ffdhe2048", 00:22:12.864 "ffdhe3072", 00:22:12.864 "ffdhe4096", 00:22:12.864 "ffdhe6144", 00:22:12.864 "ffdhe8192" 00:22:12.864 ] 00:22:12.864 } 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "method": "bdev_nvme_attach_controller", 00:22:12.864 "params": { 00:22:12.864 "name": "nvme0", 00:22:12.864 "trtype": "TCP", 00:22:12.864 "adrfam": "IPv4", 00:22:12.864 "traddr": "127.0.0.1", 00:22:12.864 "trsvcid": "4420", 00:22:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:12.864 
"prchk_reftag": false, 00:22:12.864 "prchk_guard": false, 00:22:12.864 "ctrlr_loss_timeout_sec": 0, 00:22:12.864 "reconnect_delay_sec": 0, 00:22:12.864 "fast_io_fail_timeout_sec": 0, 00:22:12.864 "psk": "key0", 00:22:12.864 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:12.864 "hdgst": false, 00:22:12.864 "ddgst": false, 00:22:12.864 "multipath": "multipath" 00:22:12.864 } 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "method": "bdev_nvme_set_hotplug", 00:22:12.864 "params": { 00:22:12.864 "period_us": 100000, 00:22:12.864 "enable": false 00:22:12.864 } 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "method": "bdev_wait_for_examine" 00:22:12.864 } 00:22:12.864 ] 00:22:12.864 }, 00:22:12.864 { 00:22:12.864 "subsystem": "nbd", 00:22:12.864 "config": [] 00:22:12.864 } 00:22:12.864 ] 00:22:12.864 }' 00:22:12.864 09:28:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:12.864 [2024-10-08 09:28:04.513719] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:22:12.864 [2024-10-08 09:28:04.514084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85931 ] 00:22:13.123 [2024-10-08 09:28:04.643194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.123 [2024-10-08 09:28:04.722798] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.382 [2024-10-08 09:28:04.878936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:13.382 [2024-10-08 09:28:04.945874] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.950 09:28:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:13.950 09:28:05 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:22:13.950 09:28:05 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:22:13.950 09:28:05 keyring_file -- keyring/file.sh@121 -- # jq length 00:22:13.950 09:28:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.209 09:28:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:14.209 09:28:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:22:14.209 09:28:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:14.209 09:28:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:14.209 09:28:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:14.209 09:28:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:14.209 09:28:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.468 09:28:06 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:22:14.468 09:28:06 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:22:14.468 09:28:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:14.468 09:28:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:14.468 09:28:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:14.468 09:28:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.468 09:28:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key1")' 00:22:14.727 09:28:06 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:22:14.727 09:28:06 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:22:14.727 09:28:06 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:22:14.727 09:28:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:14.986 09:28:06 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:22:14.986 09:28:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:14.986 09:28:06 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.8aBwBkC1Ac /tmp/tmp.1sR1S0r6AD 00:22:14.986 09:28:06 keyring_file -- keyring/file.sh@20 -- # killprocess 85931 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85931 ']' 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85931 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@955 -- # uname 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85931 00:22:14.986 killing process with pid 85931 00:22:14.986 Received shutdown signal, test time was about 1.000000 seconds 00:22:14.986 00:22:14.986 Latency(us) 00:22:14.986 [2024-10-08T09:28:06.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.986 [2024-10-08T09:28:06.669Z] =================================================================================================================== 00:22:14.986 [2024-10-08T09:28:06.669Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85931' 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@969 -- # kill 85931 00:22:14.986 09:28:06 keyring_file -- common/autotest_common.sh@974 -- # wait 85931 00:22:15.245 09:28:06 keyring_file -- keyring/file.sh@21 -- # killprocess 85663 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85663 ']' 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85663 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@955 -- # uname 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85663 00:22:15.245 killing process with pid 85663 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85663' 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@969 -- # kill 85663 00:22:15.245 09:28:06 keyring_file -- common/autotest_common.sh@974 -- # wait 85663 00:22:15.813 ************************************ 00:22:15.813 END TEST keyring_file 00:22:15.813 ************************************ 00:22:15.813 00:22:15.813 real 0m16.255s 00:22:15.813 user 0m40.023s 00:22:15.813 sys 0m3.118s 00:22:15.813 09:28:07 keyring_file -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:22:15.813 09:28:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:15.813 09:28:07 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:22:15.813 09:28:07 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:15.813 09:28:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:15.813 09:28:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:15.813 09:28:07 -- common/autotest_common.sh@10 -- # set +x 00:22:15.813 ************************************ 00:22:15.813 START TEST keyring_linux 00:22:15.813 ************************************ 00:22:15.813 09:28:07 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:15.813 Joined session keyring: 734432357 00:22:15.813 * Looking for test storage... 00:22:15.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:15.813 09:28:07 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:15.813 09:28:07 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:15.813 09:28:07 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:22:16.072 09:28:07 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@345 -- # : 1 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@368 -- # return 0 00:22:16.072 09:28:07 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.072 09:28:07 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:16.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.072 --rc genhtml_branch_coverage=1 00:22:16.072 --rc genhtml_function_coverage=1 00:22:16.072 --rc genhtml_legend=1 00:22:16.072 --rc geninfo_all_blocks=1 00:22:16.072 --rc geninfo_unexecuted_blocks=1 00:22:16.072 00:22:16.072 ' 00:22:16.072 09:28:07 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:16.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.072 --rc genhtml_branch_coverage=1 00:22:16.072 --rc genhtml_function_coverage=1 00:22:16.072 --rc genhtml_legend=1 00:22:16.072 --rc geninfo_all_blocks=1 00:22:16.072 --rc geninfo_unexecuted_blocks=1 00:22:16.072 00:22:16.072 ' 00:22:16.072 09:28:07 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:16.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.072 --rc genhtml_branch_coverage=1 00:22:16.072 --rc genhtml_function_coverage=1 00:22:16.072 --rc genhtml_legend=1 00:22:16.072 --rc geninfo_all_blocks=1 00:22:16.072 --rc geninfo_unexecuted_blocks=1 00:22:16.072 00:22:16.072 ' 00:22:16.072 09:28:07 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:16.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.072 --rc genhtml_branch_coverage=1 00:22:16.072 --rc genhtml_function_coverage=1 00:22:16.072 --rc genhtml_legend=1 00:22:16.072 --rc geninfo_all_blocks=1 00:22:16.072 --rc geninfo_unexecuted_blocks=1 00:22:16.072 00:22:16.072 ' 00:22:16.072 09:28:07 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:16.072 09:28:07 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.072 09:28:07 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a5ef64a0-86d4-4d8b-af10-05a9f556092c 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.072 09:28:07 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.072 09:28:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.072 09:28:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.072 09:28:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.072 09:28:07 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:16.072 09:28:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.072 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.072 09:28:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:16.072 09:28:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:16.072 09:28:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:16.072 09:28:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:16.072 09:28:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:16.072 09:28:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:16.072 09:28:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:16.072 09:28:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:16.072 09:28:07 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:16.072 09:28:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:16.072 09:28:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:16.072 09:28:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:16.072 09:28:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:16.072 09:28:07 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@731 -- # python - 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:16.073 /tmp/:spdk-test:key0 00:22:16.073 09:28:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:22:16.073 09:28:07 keyring_linux -- nvmf/common.sh@731 -- # python - 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:16.073 09:28:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:16.073 /tmp/:spdk-test:key1 00:22:16.073 09:28:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86058 00:22:16.073 09:28:07 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:16.073 09:28:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86058 00:22:16.073 09:28:07 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 86058 ']' 00:22:16.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.073 09:28:07 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.073 09:28:07 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.073 09:28:07 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.073 09:28:07 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.073 09:28:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 [2024-10-08 09:28:07.728072] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:22:16.073 [2024-10-08 09:28:07.728167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86058 ] 00:22:16.354 [2024-10-08 09:28:07.860520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.354 [2024-10-08 09:28:07.948982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.622 [2024-10-08 09:28:08.026886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:16.622 09:28:08 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.622 09:28:08 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:16.622 09:28:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:16.622 09:28:08 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.622 09:28:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:16.622 [2024-10-08 09:28:08.257592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.622 null0 00:22:16.622 [2024-10-08 09:28:08.289595] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:16.622 [2024-10-08 09:28:08.289848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:16.880 09:28:08 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.880 09:28:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:16.880 532065641 00:22:16.880 09:28:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:16.880 829756012 00:22:16.880 09:28:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=86068 00:22:16.880 09:28:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 86068 /var/tmp/bperf.sock 00:22:16.880 09:28:08 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:16.881 09:28:08 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 86068 ']' 00:22:16.881 09:28:08 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:16.881 09:28:08 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.881 09:28:08 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:16.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:16.881 09:28:08 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.881 09:28:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:16.881 [2024-10-08 09:28:08.360668] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:22:16.881 [2024-10-08 09:28:08.360818] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86068 ] 00:22:16.881 [2024-10-08 09:28:08.488766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.139 [2024-10-08 09:28:08.587165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.139 09:28:08 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.139 09:28:08 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:17.139 09:28:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:17.139 09:28:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:17.397 09:28:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:17.397 09:28:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:17.656 [2024-10-08 09:28:09.237897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:17.656 09:28:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:17.656 09:28:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:17.914 [2024-10-08 09:28:09.497209] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.914 nvme0n1 00:22:17.914 09:28:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:17.914 09:28:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:17.914 09:28:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:17.914 09:28:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:17.914 09:28:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:17.914 09:28:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:18.172 09:28:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:18.172 09:28:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:18.172 09:28:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:18.172 09:28:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:18.172 09:28:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:18.172 09:28:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:18.172 09:28:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:18.432 09:28:10 keyring_linux -- keyring/linux.sh@25 -- # sn=532065641 00:22:18.432 09:28:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:18.432 09:28:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:22:18.432 09:28:10 keyring_linux -- keyring/linux.sh@26 -- # [[ 532065641 == \5\3\2\0\6\5\6\4\1 ]] 00:22:18.432 09:28:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 532065641 00:22:18.432 09:28:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:18.432 09:28:10 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:18.691 Running I/O for 1 seconds... 00:22:19.626 11846.00 IOPS, 46.27 MiB/s 00:22:19.626 Latency(us) 00:22:19.626 [2024-10-08T09:28:11.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.626 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:19.626 nvme0n1 : 1.01 11883.51 46.42 0.00 0.00 10732.54 6285.50 22163.08 00:22:19.626 [2024-10-08T09:28:11.309Z] =================================================================================================================== 00:22:19.626 [2024-10-08T09:28:11.309Z] Total : 11883.51 46.42 0.00 0.00 10732.54 6285.50 22163.08 00:22:19.626 { 00:22:19.626 "results": [ 00:22:19.626 { 00:22:19.626 "job": "nvme0n1", 00:22:19.626 "core_mask": "0x2", 00:22:19.626 "workload": "randread", 00:22:19.626 "status": "finished", 00:22:19.626 "queue_depth": 128, 00:22:19.626 "io_size": 4096, 00:22:19.626 "runtime": 1.007699, 00:22:19.626 "iops": 11883.508865246467, 00:22:19.626 "mibps": 46.41995650486901, 00:22:19.626 "io_failed": 0, 00:22:19.626 "io_timeout": 0, 00:22:19.626 "avg_latency_us": 10732.544272081988, 00:22:19.626 "min_latency_us": 6285.498181818181, 00:22:19.626 "max_latency_us": 22163.083636363637 00:22:19.626 } 00:22:19.626 ], 00:22:19.626 "core_count": 1 00:22:19.626 } 00:22:19.626 09:28:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:19.626 09:28:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:19.884 09:28:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:19.884 09:28:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:19.884 09:28:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:19.884 09:28:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:19.884 09:28:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:19.884 09:28:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:20.143 09:28:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:20.143 09:28:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:20.143 09:28:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:20.143 09:28:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:20.143 09:28:11 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:22:20.143 09:28:11 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:22:20.143 09:28:11 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:20.143 09:28:11 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.143 09:28:11 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:20.143 09:28:11 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.143 09:28:11 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:20.143 09:28:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:20.402 [2024-10-08 09:28:11.993436] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:20.402 [2024-10-08 09:28:11.993589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x700aa0 (107): Transport endpoint is not connected 00:22:20.402 [2024-10-08 09:28:11.994579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x700aa0 (9): Bad file descriptor 00:22:20.402 [2024-10-08 09:28:11.995576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:20.402 [2024-10-08 09:28:11.995606] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:20.402 [2024-10-08 09:28:11.995637] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:20.402 [2024-10-08 09:28:11.995649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:22:20.402 request: 00:22:20.402 { 00:22:20.402 "name": "nvme0", 00:22:20.402 "trtype": "tcp", 00:22:20.402 "traddr": "127.0.0.1", 00:22:20.402 "adrfam": "ipv4", 00:22:20.402 "trsvcid": "4420", 00:22:20.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.402 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:20.402 "prchk_reftag": false, 00:22:20.402 "prchk_guard": false, 00:22:20.402 "hdgst": false, 00:22:20.402 "ddgst": false, 00:22:20.402 "psk": ":spdk-test:key1", 00:22:20.402 "allow_unrecognized_csi": false, 00:22:20.402 "method": "bdev_nvme_attach_controller", 00:22:20.402 "req_id": 1 00:22:20.402 } 00:22:20.402 Got JSON-RPC error response 00:22:20.402 response: 00:22:20.402 { 00:22:20.402 "code": -5, 00:22:20.402 "message": "Input/output error" 00:22:20.402 } 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@33 -- # sn=532065641 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 532065641 00:22:20.402 1 links removed 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@33 -- # sn=829756012 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 829756012 00:22:20.402 1 links removed 00:22:20.402 09:28:12 keyring_linux -- keyring/linux.sh@41 -- # killprocess 86068 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 86068 ']' 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 86068 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86068 00:22:20.402 killing process with pid 86068 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86068' 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@969 -- # kill 86068 00:22:20.402 Received shutdown signal, test time was about 1.000000 seconds 00:22:20.402 00:22:20.402 Latency(us) 
00:22:20.402 [2024-10-08T09:28:12.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.402 [2024-10-08T09:28:12.085Z] =================================================================================================================== 00:22:20.402 [2024-10-08T09:28:12.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:20.402 09:28:12 keyring_linux -- common/autotest_common.sh@974 -- # wait 86068 00:22:20.970 09:28:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86058 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 86058 ']' 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 86058 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86058 00:22:20.970 killing process with pid 86058 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86058' 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@969 -- # kill 86058 00:22:20.970 09:28:12 keyring_linux -- common/autotest_common.sh@974 -- # wait 86058 00:22:21.538 00:22:21.538 real 0m5.589s 00:22:21.538 user 0m10.476s 00:22:21.538 sys 0m1.651s 00:22:21.538 09:28:12 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:21.538 ************************************ 00:22:21.538 END TEST keyring_linux 00:22:21.538 ************************************ 00:22:21.538 09:28:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:21.538 09:28:13 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:21.538 09:28:13 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:21.538 09:28:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:21.538 09:28:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:21.538 09:28:13 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:21.538 09:28:13 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:22:21.538 09:28:13 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:22:21.538 09:28:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.538 09:28:13 -- common/autotest_common.sh@10 -- # set +x 00:22:21.538 09:28:13 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:22:21.538 09:28:13 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:21.538 09:28:13 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:21.538 09:28:13 -- common/autotest_common.sh@10 -- # set +x 00:22:23.442 INFO: APP EXITING 00:22:23.442 INFO: killing all VMs 
00:22:23.442 INFO: killing vhost app 00:22:23.442 INFO: EXIT DONE 00:22:24.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:24.010 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:24.010 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:24.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:24.946 Cleaning 00:22:24.946 Removing: /var/run/dpdk/spdk0/config 00:22:24.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:24.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:24.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:24.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:24.946 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:24.946 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:24.946 Removing: /var/run/dpdk/spdk1/config 00:22:24.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:24.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:24.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:24.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:24.946 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:24.946 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:24.946 Removing: /var/run/dpdk/spdk2/config 00:22:24.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:24.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:24.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:24.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:24.946 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:24.946 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:24.946 Removing: /var/run/dpdk/spdk3/config 00:22:24.946 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:24.946 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:24.946 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:24.947 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:24.947 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:24.947 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:24.947 Removing: /var/run/dpdk/spdk4/config 00:22:24.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:24.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:24.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:24.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:24.947 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:24.947 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:24.947 Removing: /dev/shm/nvmf_trace.0 00:22:24.947 Removing: /dev/shm/spdk_tgt_trace.pid56935 00:22:24.947 Removing: /var/run/dpdk/spdk0 00:22:24.947 Removing: /var/run/dpdk/spdk1 00:22:24.947 Removing: /var/run/dpdk/spdk2 00:22:24.947 Removing: /var/run/dpdk/spdk3 00:22:24.947 Removing: /var/run/dpdk/spdk4 00:22:24.947 Removing: /var/run/dpdk/spdk_pid56782 00:22:24.947 Removing: /var/run/dpdk/spdk_pid56935 00:22:24.947 Removing: /var/run/dpdk/spdk_pid57141 00:22:24.947 Removing: /var/run/dpdk/spdk_pid57228 00:22:24.947 Removing: /var/run/dpdk/spdk_pid57255 00:22:24.947 Removing: /var/run/dpdk/spdk_pid57365 00:22:24.947 Removing: /var/run/dpdk/spdk_pid57383 00:22:24.947 Removing: /var/run/dpdk/spdk_pid57528 00:22:24.947 Removing: /var/run/dpdk/spdk_pid57723 00:22:24.947 Removing: /var/run/dpdk/spdk_pid57874 00:22:24.947 Removing: /var/run/dpdk/spdk_pid57950 00:22:24.947 
Removing: /var/run/dpdk/spdk_pid58034 00:22:24.947 Removing: /var/run/dpdk/spdk_pid58134 00:22:24.947 Removing: /var/run/dpdk/spdk_pid58219 00:22:24.947 Removing: /var/run/dpdk/spdk_pid58252 00:22:24.947 Removing: /var/run/dpdk/spdk_pid58282 00:22:24.947 Removing: /var/run/dpdk/spdk_pid58357 00:22:24.947 Removing: /var/run/dpdk/spdk_pid58462 00:22:24.947 Removing: /var/run/dpdk/spdk_pid58914 00:22:24.947 Removing: /var/run/dpdk/spdk_pid58966 00:22:24.947 Removing: /var/run/dpdk/spdk_pid59017 00:22:24.947 Removing: /var/run/dpdk/spdk_pid59033 00:22:24.947 Removing: /var/run/dpdk/spdk_pid59100 00:22:24.947 Removing: /var/run/dpdk/spdk_pid59116 00:22:24.947 Removing: /var/run/dpdk/spdk_pid59183 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59199 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59250 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59268 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59308 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59326 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59462 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59496 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59580 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59914 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59931 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59968 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59976 00:22:25.206 Removing: /var/run/dpdk/spdk_pid59997 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60016 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60035 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60058 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60077 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60085 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60106 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60125 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60144 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60163 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60184 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60192 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60214 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60233 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60254 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60270 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60300 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60319 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60350 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60421 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60449 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60464 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60493 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60502 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60515 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60558 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60571 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60605 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60615 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60624 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60634 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60649 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60658 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60668 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60677 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60706 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60738 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60747 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60776 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60791 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60793 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60839 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60859 00:22:25.206 Removing: 
/var/run/dpdk/spdk_pid60884 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60893 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60906 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60908 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60921 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60934 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60936 00:22:25.206 Removing: /var/run/dpdk/spdk_pid60949 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61031 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61084 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61202 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61233 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61276 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61296 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61318 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61338 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61370 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61385 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61469 00:22:25.206 Removing: /var/run/dpdk/spdk_pid61490 00:22:25.465 Removing: /var/run/dpdk/spdk_pid61540 00:22:25.465 Removing: /var/run/dpdk/spdk_pid61611 00:22:25.465 Removing: /var/run/dpdk/spdk_pid61672 00:22:25.465 Removing: /var/run/dpdk/spdk_pid61701 00:22:25.465 Removing: /var/run/dpdk/spdk_pid61801 00:22:25.465 Removing: /var/run/dpdk/spdk_pid61849 00:22:25.465 Removing: /var/run/dpdk/spdk_pid61887 00:22:25.465 Removing: /var/run/dpdk/spdk_pid62108 00:22:25.465 Removing: /var/run/dpdk/spdk_pid62211 00:22:25.465 Removing: /var/run/dpdk/spdk_pid62244 00:22:25.466 Removing: /var/run/dpdk/spdk_pid62269 00:22:25.466 Removing: /var/run/dpdk/spdk_pid62307 00:22:25.466 Removing: /var/run/dpdk/spdk_pid62336 00:22:25.466 Removing: /var/run/dpdk/spdk_pid62375 00:22:25.466 Removing: /var/run/dpdk/spdk_pid62407 00:22:25.466 Removing: /var/run/dpdk/spdk_pid62808 00:22:25.466 Removing: /var/run/dpdk/spdk_pid62846 00:22:25.466 Removing: /var/run/dpdk/spdk_pid63197 00:22:25.466 Removing: /var/run/dpdk/spdk_pid63662 00:22:25.466 Removing: /var/run/dpdk/spdk_pid63943 00:22:25.466 Removing: /var/run/dpdk/spdk_pid64854 00:22:25.466 Removing: /var/run/dpdk/spdk_pid65797 00:22:25.466 Removing: /var/run/dpdk/spdk_pid65914 00:22:25.466 Removing: /var/run/dpdk/spdk_pid65982 00:22:25.466 Removing: /var/run/dpdk/spdk_pid67424 00:22:25.466 Removing: /var/run/dpdk/spdk_pid67741 00:22:25.466 Removing: /var/run/dpdk/spdk_pid71460 00:22:25.466 Removing: /var/run/dpdk/spdk_pid71825 00:22:25.466 Removing: /var/run/dpdk/spdk_pid71935 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72075 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72104 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72138 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72159 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72251 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72387 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72545 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72627 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72827 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72897 00:22:25.466 Removing: /var/run/dpdk/spdk_pid72995 00:22:25.466 Removing: /var/run/dpdk/spdk_pid73361 00:22:25.466 Removing: /var/run/dpdk/spdk_pid73779 00:22:25.466 Removing: /var/run/dpdk/spdk_pid73780 00:22:25.466 Removing: /var/run/dpdk/spdk_pid73781 00:22:25.466 Removing: /var/run/dpdk/spdk_pid74059 00:22:25.466 Removing: /var/run/dpdk/spdk_pid74382 00:22:25.466 Removing: /var/run/dpdk/spdk_pid74389 00:22:25.466 Removing: /var/run/dpdk/spdk_pid74714 00:22:25.466 Removing: /var/run/dpdk/spdk_pid74728 00:22:25.466 Removing: /var/run/dpdk/spdk_pid74748 
00:22:25.466 Removing: /var/run/dpdk/spdk_pid74773 00:22:25.466 Removing: /var/run/dpdk/spdk_pid74782 00:22:25.466 Removing: /var/run/dpdk/spdk_pid75135 00:22:25.466 Removing: /var/run/dpdk/spdk_pid75184 00:22:25.466 Removing: /var/run/dpdk/spdk_pid75517 00:22:25.466 Removing: /var/run/dpdk/spdk_pid75713 00:22:25.466 Removing: /var/run/dpdk/spdk_pid76154 00:22:25.466 Removing: /var/run/dpdk/spdk_pid76711 00:22:25.466 Removing: /var/run/dpdk/spdk_pid77590 00:22:25.466 Removing: /var/run/dpdk/spdk_pid78235 00:22:25.466 Removing: /var/run/dpdk/spdk_pid78238 00:22:25.466 Removing: /var/run/dpdk/spdk_pid80300 00:22:25.466 Removing: /var/run/dpdk/spdk_pid80360 00:22:25.466 Removing: /var/run/dpdk/spdk_pid80421 00:22:25.466 Removing: /var/run/dpdk/spdk_pid80487 00:22:25.466 Removing: /var/run/dpdk/spdk_pid80595 00:22:25.466 Removing: /var/run/dpdk/spdk_pid80655 00:22:25.466 Removing: /var/run/dpdk/spdk_pid80716 00:22:25.466 Removing: /var/run/dpdk/spdk_pid80776 00:22:25.466 Removing: /var/run/dpdk/spdk_pid81170 00:22:25.466 Removing: /var/run/dpdk/spdk_pid82392 00:22:25.466 Removing: /var/run/dpdk/spdk_pid82539 00:22:25.466 Removing: /var/run/dpdk/spdk_pid82781 00:22:25.466 Removing: /var/run/dpdk/spdk_pid83397 00:22:25.466 Removing: /var/run/dpdk/spdk_pid83556 00:22:25.466 Removing: /var/run/dpdk/spdk_pid83717 00:22:25.466 Removing: /var/run/dpdk/spdk_pid83809 00:22:25.466 Removing: /var/run/dpdk/spdk_pid83969 00:22:25.466 Removing: /var/run/dpdk/spdk_pid84082 00:22:25.724 Removing: /var/run/dpdk/spdk_pid84793 00:22:25.725 Removing: /var/run/dpdk/spdk_pid84828 00:22:25.725 Removing: /var/run/dpdk/spdk_pid84869 00:22:25.725 Removing: /var/run/dpdk/spdk_pid85126 00:22:25.725 Removing: /var/run/dpdk/spdk_pid85157 00:22:25.725 Removing: /var/run/dpdk/spdk_pid85191 00:22:25.725 Removing: /var/run/dpdk/spdk_pid85663 00:22:25.725 Removing: /var/run/dpdk/spdk_pid85679 00:22:25.725 Removing: /var/run/dpdk/spdk_pid85931 00:22:25.725 Removing: /var/run/dpdk/spdk_pid86058 00:22:25.725 Removing: /var/run/dpdk/spdk_pid86068 00:22:25.725 Clean 00:22:25.725 09:28:17 -- common/autotest_common.sh@1451 -- # return 0 00:22:25.725 09:28:17 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:22:25.725 09:28:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.725 09:28:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.725 09:28:17 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:22:25.725 09:28:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.725 09:28:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.725 09:28:17 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:25.725 09:28:17 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:25.725 09:28:17 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:25.725 09:28:17 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:22:25.725 09:28:17 -- spdk/autotest.sh@394 -- # hostname 00:22:25.725 09:28:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:25.984 geninfo: WARNING: invalid characters removed from testname! 
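
The lcov invocations immediately above and below implement a capture, merge, and filter sequence for the code-coverage report: capture the counters produced while the tests ran, add them to the pre-test baseline, then strip third-party and system sources from the combined tracefile. The following is a minimal bash sketch of that flow, assuming lcov 1.x command-line semantics and the output paths shown in the surrounding log; it is an illustration of the pattern, not the job's actual autotest.sh code.

#!/usr/bin/env bash
# Sketch of the coverage post-processing seen in this log (illustrative only):
# capture test-time counters, merge with the baseline, and remove
# third-party/system sources from the merged report.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk          # build tree holding the .gcda/.gcno files
OUT_DIR="$SPDK_DIR/../output"                  # report destination used by the job
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# 1. Capture coverage gathered while the tests ran, labelled with the hostname.
lcov $LCOV_OPTS -q -c --no-external -d "$SPDK_DIR" \
     -t "$(hostname)" -o "$OUT_DIR/cov_test.info"

# 2. Merge the pre-test baseline with the test-time capture.
lcov $LCOV_OPTS -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" \
     -o "$OUT_DIR/cov_total.info"

# 3. Drop sources that are not SPDK's own code from the merged tracefile.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -q -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
done

Writing the filtered output back over cov_total.info, as the log itself does, keeps a single rolling tracefile; the real run additionally passes --ignore-errors for the /usr/* removal, which this sketch omits.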
00:22:47.917 09:28:39 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:51.229 09:28:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:53.762 09:28:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:55.667 09:28:47 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:58.202 09:28:49 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:00.737 09:28:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:02.642 09:28:54 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:02.642 09:28:54 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:23:02.642 09:28:54 -- common/autotest_common.sh@1681 -- $ lcov --version 00:23:02.642 09:28:54 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:23:02.901 09:28:54 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:23:02.901 09:28:54 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:23:02.901 09:28:54 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:23:02.901 09:28:54 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:23:02.901 09:28:54 -- scripts/common.sh@336 -- $ IFS=.-: 00:23:02.901 09:28:54 -- scripts/common.sh@336 -- $ read -ra ver1 00:23:02.901 09:28:54 -- scripts/common.sh@337 -- $ IFS=.-: 00:23:02.901 09:28:54 -- scripts/common.sh@337 -- $ read -ra ver2 00:23:02.901 09:28:54 -- scripts/common.sh@338 -- $ local 'op=<' 00:23:02.901 09:28:54 -- scripts/common.sh@340 -- $ ver1_l=2 00:23:02.901 09:28:54 -- scripts/common.sh@341 -- $ ver2_l=1 00:23:02.901 09:28:54 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:23:02.901 09:28:54 -- scripts/common.sh@344 -- $ case "$op" in 00:23:02.901 09:28:54 -- scripts/common.sh@345 -- $ : 1 00:23:02.901 09:28:54 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:23:02.901 09:28:54 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:02.901 09:28:54 -- scripts/common.sh@365 -- $ decimal 1 00:23:02.901 09:28:54 -- scripts/common.sh@353 -- $ local d=1 00:23:02.901 09:28:54 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:23:02.901 09:28:54 -- scripts/common.sh@355 -- $ echo 1 00:23:02.901 09:28:54 -- scripts/common.sh@365 -- $ ver1[v]=1 00:23:02.901 09:28:54 -- scripts/common.sh@366 -- $ decimal 2 00:23:02.901 09:28:54 -- scripts/common.sh@353 -- $ local d=2 00:23:02.901 09:28:54 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:23:02.901 09:28:54 -- scripts/common.sh@355 -- $ echo 2 00:23:02.901 09:28:54 -- scripts/common.sh@366 -- $ ver2[v]=2 00:23:02.901 09:28:54 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:23:02.901 09:28:54 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:23:02.901 09:28:54 -- scripts/common.sh@368 -- $ return 0 00:23:02.901 09:28:54 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.901 09:28:54 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:23:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.901 --rc genhtml_branch_coverage=1 00:23:02.901 --rc genhtml_function_coverage=1 00:23:02.901 --rc genhtml_legend=1 00:23:02.901 --rc geninfo_all_blocks=1 00:23:02.901 --rc geninfo_unexecuted_blocks=1 00:23:02.901 00:23:02.901 ' 00:23:02.901 09:28:54 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:23:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.901 --rc genhtml_branch_coverage=1 00:23:02.901 --rc genhtml_function_coverage=1 00:23:02.901 --rc genhtml_legend=1 00:23:02.901 --rc geninfo_all_blocks=1 00:23:02.901 --rc geninfo_unexecuted_blocks=1 00:23:02.901 00:23:02.901 ' 00:23:02.901 09:28:54 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:23:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.901 --rc genhtml_branch_coverage=1 00:23:02.901 --rc genhtml_function_coverage=1 00:23:02.901 --rc genhtml_legend=1 00:23:02.901 --rc geninfo_all_blocks=1 00:23:02.901 --rc geninfo_unexecuted_blocks=1 00:23:02.901 00:23:02.901 ' 00:23:02.901 09:28:54 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:23:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.901 --rc genhtml_branch_coverage=1 00:23:02.901 --rc genhtml_function_coverage=1 00:23:02.901 --rc genhtml_legend=1 00:23:02.901 --rc geninfo_all_blocks=1 00:23:02.901 --rc geninfo_unexecuted_blocks=1 00:23:02.901 00:23:02.901 ' 00:23:02.901 09:28:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:02.901 09:28:54 -- scripts/common.sh@15 -- $ shopt -s extglob 00:23:02.901 09:28:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:02.901 09:28:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.901 09:28:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.901 09:28:54 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.901 09:28:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.902 09:28:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.902 09:28:54 -- paths/export.sh@5 -- $ export PATH 00:23:02.902 09:28:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.902 09:28:54 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:02.902 09:28:54 -- common/autobuild_common.sh@486 -- $ date +%s 00:23:02.902 09:28:54 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728379734.XXXXXX 00:23:02.902 09:28:54 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728379734.JepWs4 00:23:02.902 09:28:54 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:23:02.902 09:28:54 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:23:02.902 09:28:54 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:02.902 09:28:54 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:02.902 09:28:54 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:02.902 09:28:54 -- common/autobuild_common.sh@502 -- $ get_config_params 00:23:02.902 09:28:54 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:23:02.902 09:28:54 -- common/autotest_common.sh@10 -- $ set +x 00:23:02.902 09:28:54 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:23:02.902 09:28:54 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:23:02.902 09:28:54 -- pm/common@17 -- $ local monitor 00:23:02.902 09:28:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:02.902 09:28:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:02.902 
09:28:54 -- pm/common@25 -- $ sleep 1 00:23:02.902 09:28:54 -- pm/common@21 -- $ date +%s 00:23:02.902 09:28:54 -- pm/common@21 -- $ date +%s 00:23:02.902 09:28:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728379734 00:23:02.902 09:28:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728379734 00:23:02.902 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728379734_collect-cpu-load.pm.log 00:23:02.902 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728379734_collect-vmstat.pm.log 00:23:03.840 09:28:55 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:23:03.840 09:28:55 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:23:03.840 09:28:55 -- spdk/autopackage.sh@14 -- $ timing_finish 00:23:03.840 09:28:55 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:03.840 09:28:55 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:03.840 09:28:55 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:03.840 09:28:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:03.840 09:28:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:03.840 09:28:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:03.840 09:28:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:03.840 09:28:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:03.840 09:28:55 -- pm/common@44 -- $ pid=87800 00:23:03.840 09:28:55 -- pm/common@50 -- $ kill -TERM 87800 00:23:03.840 09:28:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:03.840 09:28:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:03.840 09:28:55 -- pm/common@44 -- $ pid=87802 00:23:03.840 09:28:55 -- pm/common@50 -- $ kill -TERM 87802 00:23:03.840 + [[ -n 5371 ]] 00:23:03.840 + sudo kill 5371 00:23:03.849 [Pipeline] } 00:23:03.864 [Pipeline] // timeout 00:23:03.869 [Pipeline] } 00:23:03.885 [Pipeline] // stage 00:23:03.890 [Pipeline] } 00:23:03.905 [Pipeline] // catchError 00:23:03.914 [Pipeline] stage 00:23:03.917 [Pipeline] { (Stop VM) 00:23:03.929 [Pipeline] sh 00:23:04.211 + vagrant halt 00:23:07.498 ==> default: Halting domain... 00:23:14.080 [Pipeline] sh 00:23:14.363 + vagrant destroy -f 00:23:16.896 ==> default: Removing domain... 
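
The teardown traced above follows a simple pattern: signal the pid-file-tracked resource monitors, then halt and destroy the Vagrant guest. Below is a minimal bash sketch of that pattern, assuming the pid-file names and power directory shown in the log; in the real job the kill is issued by the pm/common helpers inside the guest while vagrant is driven from the Jenkins workspace, so the two halves are combined here only for illustration.

#!/usr/bin/env bash
# Sketch of the end-of-run cleanup seen in this log (illustrative only):
# stop the pid-file-tracked monitors, then tear down the test VM.
set -u

POWER_DIR=/home/vagrant/spdk_repo/spdk/../output/power   # where the monitors wrote their pid files

for pidfile in "$POWER_DIR"/collect-cpu-load.pid "$POWER_DIR"/collect-vmstat.pid; do
    [[ -e "$pidfile" ]] || continue
    kill -TERM "$(cat "$pidfile")" 2>/dev/null || true    # best-effort TERM; the monitor may already have exited
done

# Tear down the libvirt-backed Vagrant guest used for the test run.
vagrant halt
vagrant destroy -f
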
00:23:17.164 [Pipeline] sh 00:23:17.444 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:17.454 [Pipeline] } 00:23:17.468 [Pipeline] // stage 00:23:17.473 [Pipeline] } 00:23:17.487 [Pipeline] // dir 00:23:17.492 [Pipeline] } 00:23:17.507 [Pipeline] // wrap 00:23:17.514 [Pipeline] } 00:23:17.526 [Pipeline] // catchError 00:23:17.536 [Pipeline] stage 00:23:17.539 [Pipeline] { (Epilogue) 00:23:17.552 [Pipeline] sh 00:23:17.838 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:23.163 [Pipeline] catchError 00:23:23.165 [Pipeline] { 00:23:23.178 [Pipeline] sh 00:23:23.461 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:23.461 Artifacts sizes are good 00:23:23.470 [Pipeline] } 00:23:23.484 [Pipeline] // catchError 00:23:23.494 [Pipeline] archiveArtifacts 00:23:23.501 Archiving artifacts 00:23:23.629 [Pipeline] cleanWs 00:23:23.641 [WS-CLEANUP] Deleting project workspace... 00:23:23.641 [WS-CLEANUP] Deferred wipeout is used... 00:23:23.647 [WS-CLEANUP] done 00:23:23.649 [Pipeline] } 00:23:23.665 [Pipeline] // stage 00:23:23.670 [Pipeline] } 00:23:23.684 [Pipeline] // node 00:23:23.689 [Pipeline] End of Pipeline 00:23:23.725 Finished: SUCCESS